00:00:00.001 Started by upstream project "autotest-nightly" build number 4155
00:00:00.001 originally caused by:
00:00:00.001  Started by upstream project "nightly-trigger" build number 3517
00:00:00.001  originally caused by:
00:00:00.001   Started by timer
00:00:00.022 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.023 The recommended git tool is: git
00:00:00.023 using credential 00000000-0000-0000-0000-000000000002
00:00:00.025 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.052 Fetching changes from the remote Git repository
00:00:00.054 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.087 Using shallow fetch with depth 1
00:00:00.087 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.087 > git --version # timeout=10
00:00:00.112 > git --version # 'git version 2.39.2'
00:00:00.112 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.180 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.180 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.563 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.574 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.584 Checking out Revision f95f9907808933a1db7196e15e13478e0f322ee7 (FETCH_HEAD)
00:00:07.584 > git config core.sparsecheckout # timeout=10
00:00:07.594 > git read-tree -mu HEAD # timeout=10
00:00:07.609 > git checkout -f f95f9907808933a1db7196e15e13478e0f322ee7 # timeout=5
00:00:07.626 Commit message: "Revert "autotest-phy: replace deprecated label for nvmf-cvl""
00:00:07.626 > git rev-list --no-walk f95f9907808933a1db7196e15e13478e0f322ee7 # timeout=10
00:00:07.738 [Pipeline] Start of Pipeline
00:00:07.747 [Pipeline] library
00:00:07.748 Loading library shm_lib@master
00:00:07.749 Library shm_lib@master is cached. Copying from home.
00:00:07.766 [Pipeline] node
00:00:07.775 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.777 [Pipeline] {
00:00:07.788 [Pipeline] catchError
00:00:07.789 [Pipeline] {
00:00:07.801 [Pipeline] wrap
00:00:07.808 [Pipeline] {
00:00:07.814 [Pipeline] stage
00:00:07.815 [Pipeline] { (Prologue)
00:00:08.057 [Pipeline] sh
00:00:08.342 + logger -p user.info -t JENKINS-CI
00:00:08.361 [Pipeline] echo
00:00:08.363 Node: CYP12
00:00:08.368 [Pipeline] sh
00:00:08.667 [Pipeline] setCustomBuildProperty
00:00:08.677 [Pipeline] echo
00:00:08.678 Cleanup processes
00:00:08.683 [Pipeline] sh
00:00:08.970 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.970 2672475 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.983 [Pipeline] sh
00:00:09.274 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.274 ++ grep -v 'sudo pgrep'
00:00:09.274 ++ awk '{print $1}'
00:00:09.274 + sudo kill -9
00:00:09.274 + true
00:00:09.288 [Pipeline] cleanWs
00:00:09.298 [WS-CLEANUP] Deleting project workspace...
00:00:09.298 [WS-CLEANUP] Deferred wipeout is used...
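The "Cleanup processes" step above is the usual pgrep/kill idiom: list anything still running out of the workspace, strip the pgrep process itself, and kill the rest, tolerating the common case where nothing is left (which is why the log ends the step with `+ true` after `kill -9` receives no PIDs). A minimal standalone sketch of that idiom — the variable names and the workspace default here are assumptions for illustration, not the real Jenkins job script:

```shell
#!/usr/bin/env bash
# Sketch of the stale-process cleanup step seen in the log above.
WORKSPACE=${WORKSPACE:-/var/jenkins/workspace/nvmf-tcp-phy-autotest}

# pgrep -af prints "PID CMDLINE" for full-command-line matches;
# drop the pgrep invocation itself, keep only the PIDs.
pids=$(pgrep -af "$WORKSPACE/spdk" | grep -v 'pgrep' | awk '{print $1}' || true)

# kill with an empty argument list exits non-zero, hence the trailing
# "|| true" -- the same reason the log shows "+ true" after "+ sudo kill -9".
kill -9 $pids 2>/dev/null || true
echo "cleanup done"
```

Leaving `$pids` unquoted is deliberate here: it lets the shell word-split a multi-line PID list into separate `kill` arguments.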
00:00:09.304 [WS-CLEANUP] done
00:00:09.308 [Pipeline] setCustomBuildProperty
00:00:09.321 [Pipeline] sh
00:00:09.607 + sudo git config --global --replace-all safe.directory '*'
00:00:09.711 [Pipeline] httpRequest
00:00:10.073 [Pipeline] echo
00:00:10.074 Sorcerer 10.211.164.101 is alive
00:00:10.081 [Pipeline] retry
00:00:10.082 [Pipeline] {
00:00:10.092 [Pipeline] httpRequest
00:00:10.096 HttpMethod: GET
00:00:10.096 URL: http://10.211.164.101/packages/jbp_f95f9907808933a1db7196e15e13478e0f322ee7.tar.gz
00:00:10.097 Sending request to url: http://10.211.164.101/packages/jbp_f95f9907808933a1db7196e15e13478e0f322ee7.tar.gz
00:00:10.120 Response Code: HTTP/1.1 200 OK
00:00:10.120 Success: Status code 200 is in the accepted range: 200,404
00:00:10.120 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f95f9907808933a1db7196e15e13478e0f322ee7.tar.gz
00:00:30.351 [Pipeline] }
00:00:30.368 [Pipeline] // retry
00:00:30.376 [Pipeline] sh
00:00:30.665 + tar --no-same-owner -xf jbp_f95f9907808933a1db7196e15e13478e0f322ee7.tar.gz
00:00:30.682 [Pipeline] httpRequest
00:00:31.052 [Pipeline] echo
00:00:31.054 Sorcerer 10.211.164.101 is alive
00:00:31.064 [Pipeline] retry
00:00:31.066 [Pipeline] {
00:00:31.079 [Pipeline] httpRequest
00:00:31.084 HttpMethod: GET
00:00:31.085 URL: http://10.211.164.101/packages/spdk_3950cd1bb06afd1aee639e4df4d9335440fe2ead.tar.gz
00:00:31.085 Sending request to url: http://10.211.164.101/packages/spdk_3950cd1bb06afd1aee639e4df4d9335440fe2ead.tar.gz
00:00:31.093 Response Code: HTTP/1.1 200 OK
00:00:31.094 Success: Status code 200 is in the accepted range: 200,404
00:00:31.094 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_3950cd1bb06afd1aee639e4df4d9335440fe2ead.tar.gz
00:03:41.279 [Pipeline] }
00:03:41.296 [Pipeline] // retry
00:03:41.302 [Pipeline] sh
00:03:41.637 + tar --no-same-owner -xf spdk_3950cd1bb06afd1aee639e4df4d9335440fe2ead.tar.gz
00:03:44.194 [Pipeline] sh
00:03:44.481 + git -C spdk log --oneline -n5
00:03:44.481 3950cd1bb bdev/nvme: Change spdk_bdev_reset() to succeed if at least one nvme_ctrlr is reconnected
00:03:44.481 f9141d271 test/blob: Add BLOCKLEN macro in blob_ut
00:03:44.481 82c46626a lib/event: implement scheduler trace events
00:03:44.481 fa6aec495 lib/thread: register thread owner type for scheduler trace events
00:03:44.481 1876d41a3 include/spdk_internal: define scheduler tracegroup and tracepoints
00:03:44.492 [Pipeline] }
00:03:44.506 [Pipeline] // stage
00:03:44.514 [Pipeline] stage
00:03:44.516 [Pipeline] { (Prepare)
00:03:44.530 [Pipeline] writeFile
00:03:44.544 [Pipeline] sh
00:03:44.829 + logger -p user.info -t JENKINS-CI
00:03:44.841 [Pipeline] sh
00:03:45.126 + logger -p user.info -t JENKINS-CI
00:03:45.137 [Pipeline] sh
00:03:45.423 + cat autorun-spdk.conf
00:03:45.423 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:45.423 SPDK_TEST_NVMF=1
00:03:45.423 SPDK_TEST_NVME_CLI=1
00:03:45.423 SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:45.423 SPDK_TEST_NVMF_NICS=e810
00:03:45.423 SPDK_RUN_ASAN=1
00:03:45.423 SPDK_RUN_UBSAN=1
00:03:45.423 NET_TYPE=phy
00:03:45.431 RUN_NIGHTLY=1
00:03:45.435 [Pipeline] readFile
00:03:45.458 [Pipeline] withEnv
00:03:45.460 [Pipeline] {
00:03:45.471 [Pipeline] sh
00:03:45.757 + set -ex
00:03:45.757 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:03:45.757 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:45.757 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:45.757 ++ SPDK_TEST_NVMF=1
00:03:45.757 ++ SPDK_TEST_NVME_CLI=1
00:03:45.757 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:45.757 ++ SPDK_TEST_NVMF_NICS=e810
00:03:45.757 ++ SPDK_RUN_ASAN=1
00:03:45.757 ++ SPDK_RUN_UBSAN=1
00:03:45.757 ++ NET_TYPE=phy
00:03:45.757 ++ RUN_NIGHTLY=1
00:03:45.757 + case $SPDK_TEST_NVMF_NICS in
00:03:45.757 + DRIVERS=ice
00:03:45.757 + [[ tcp == \r\d\m\a ]]
00:03:45.757 + [[ -n ice ]]
00:03:45.757 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:03:45.757 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:03:45.757 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:03:45.757 rmmod: ERROR: Module irdma is not currently loaded
00:03:45.757 rmmod: ERROR: Module i40iw is not currently loaded
00:03:45.757 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:03:45.757 + true
00:03:45.757 + for D in $DRIVERS
00:03:45.757 + sudo modprobe ice
00:03:45.757 + exit 0
00:03:45.766 [Pipeline] }
00:03:45.780 [Pipeline] // withEnv
00:03:45.784 [Pipeline] }
00:03:45.797 [Pipeline] // stage
00:03:45.804 [Pipeline] catchError
00:03:45.805 [Pipeline] {
00:03:45.817 [Pipeline] timeout
00:03:45.817 Timeout set to expire in 1 hr 0 min
00:03:45.818 [Pipeline] {
00:03:45.831 [Pipeline] stage
00:03:45.833 [Pipeline] { (Tests)
00:03:45.845 [Pipeline] sh
00:03:46.134 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:46.134 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:46.134 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:46.134 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:03:46.134 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:46.134 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:46.134 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:03:46.134 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:46.134 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:46.134 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:46.134 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:03:46.134 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:46.134 + source /etc/os-release
00:03:46.134 ++ NAME='Fedora Linux'
00:03:46.134 ++ VERSION='39 (Cloud Edition)'
00:03:46.134 ++ ID=fedora
00:03:46.134 ++ VERSION_ID=39
00:03:46.134 ++ VERSION_CODENAME=
00:03:46.134 ++ PLATFORM_ID=platform:f39
00:03:46.134 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:46.134 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:46.134 ++ LOGO=fedora-logo-icon
00:03:46.134 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:46.134 ++ HOME_URL=https://fedoraproject.org/
00:03:46.134 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:46.134 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:46.134 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:46.134 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:46.134 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:46.134 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:46.134 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:46.134 ++ SUPPORT_END=2024-11-12
00:03:46.134 ++ VARIANT='Cloud Edition'
00:03:46.134 ++ VARIANT_ID=cloud
00:03:46.134 + uname -a
00:03:46.134 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:46.134 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:48.677 Hugepages
00:03:48.677 node hugesize free / total
00:03:48.677 node0 1048576kB 0 / 0
00:03:48.677 node0 2048kB 0 / 0
00:03:48.677 node1 1048576kB 0 / 0
00:03:48.677 node1 2048kB 0 / 0
00:03:48.677
00:03:48.677 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:48.677 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:03:48.677 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
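The `source /etc/os-release` step above works because `/etc/os-release` is a shell-compatible `KEY=value` file: sourcing it sets `NAME`, `ID`, `VERSION_ID`, and so on in the current shell, which the autorun script then branches on (e.g. the `[[ Fedora Linux == FreeBSD ]]` check later in this log). A small self-contained sketch of the idiom — `describe_os` is a hypothetical helper, not part of SPDK's scripts:

```shell
#!/usr/bin/env bash
# Sketch of the /etc/os-release sourcing idiom used in the log above.
describe_os() {
  local f=${1:-/etc/os-release}
  # Source in a subshell so ID/VERSION_ID/... don't leak into the caller.
  ( . "$f" && echo "${ID:-unknown} ${VERSION_ID:-0}" )
}

# Demo against a synthetic file so the sketch runs anywhere.
tmp=$(mktemp)
printf 'ID=fedora\nVERSION_ID=39\n' > "$tmp"
describe_os "$tmp"   # prints: fedora 39
rm -f "$tmp"
```

Sourcing in a subshell is the safer variant; the CI script sources directly because it wants the variables in its own environment.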
00:03:48.677 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:03:48.677 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:03:48.677 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:03:48.677 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:03:48.677 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:03:48.677 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:03:48.938 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:03:48.938 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:03:48.938 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:03:48.938 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:03:48.938 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:03:48.938 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:03:48.938 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:03:48.938 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:03:48.938 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:03:48.938 + rm -f /tmp/spdk-ld-path
00:03:48.938 + source autorun-spdk.conf
00:03:48.938 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:48.938 ++ SPDK_TEST_NVMF=1
00:03:48.938 ++ SPDK_TEST_NVME_CLI=1
00:03:48.938 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:48.938 ++ SPDK_TEST_NVMF_NICS=e810
00:03:48.938 ++ SPDK_RUN_ASAN=1
00:03:48.938 ++ SPDK_RUN_UBSAN=1
00:03:48.938 ++ NET_TYPE=phy
00:03:48.938 ++ RUN_NIGHTLY=1
00:03:48.938 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:48.938 + [[ -n '' ]]
00:03:48.938 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:48.938 + for M in /var/spdk/build-*-manifest.txt
00:03:48.938 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:48.938 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:48.938 + for M in /var/spdk/build-*-manifest.txt
00:03:48.938 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:48.938 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:48.939 + for M in /var/spdk/build-*-manifest.txt
00:03:48.939 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:48.939 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:48.939 ++ uname
00:03:48.939 + [[ Linux == \L\i\n\u\x ]]
00:03:48.939 + sudo dmesg -T
00:03:48.939 + sudo dmesg --clear
00:03:48.939 + dmesg_pid=2674060
00:03:48.939 + [[ Fedora Linux == FreeBSD ]]
00:03:48.939 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:48.939 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:48.939 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:48.939 + [[ -x /usr/src/fio-static/fio ]]
00:03:48.939 + export FIO_BIN=/usr/src/fio-static/fio
00:03:48.939 + FIO_BIN=/usr/src/fio-static/fio
00:03:48.939 + sudo dmesg -Tw
00:03:48.939 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:48.939 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:48.939 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:48.939 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:48.939 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:48.939 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:48.939 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:48.939 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:48.939 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:48.939 Test configuration:
00:03:48.939 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:48.939 SPDK_TEST_NVMF=1
00:03:48.939 SPDK_TEST_NVME_CLI=1
00:03:48.939 SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:48.939 SPDK_TEST_NVMF_NICS=e810
00:03:48.939 SPDK_RUN_ASAN=1
00:03:48.939 SPDK_RUN_UBSAN=1
00:03:48.939 NET_TYPE=phy
00:03:48.939 RUN_NIGHTLY=1
00:03:48.939 14:14:12 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:03:48.939 14:14:12 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:03:48.939 14:14:12 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:48.939 14:14:12 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:48.939 14:14:12 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:48.939 14:14:12 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:48.939 14:14:12 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:48.939 14:14:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:48.939 14:14:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:48.939 14:14:12 -- paths/export.sh@5 -- $ export PATH
00:03:48.939 14:14:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:48.939 14:14:12 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:03:48.939 14:14:12 -- common/autobuild_common.sh@486 -- $ date +%s
00:03:48.939 14:14:12 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728303252.XXXXXX
00:03:48.939 14:14:12 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728303252.VBq0Tp
00:03:48.939 14:14:12 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:03:48.939 14:14:12 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:03:48.939 14:14:12 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:03:48.939 14:14:12 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:03:48.939 14:14:12 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:03:48.939 14:14:12 -- common/autobuild_common.sh@502 -- $ get_config_params
00:03:48.939 14:14:12 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:03:48.939 14:14:12 -- common/autotest_common.sh@10 -- $ set +x
00:03:49.199 14:14:12 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:03:49.199 14:14:12 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:03:49.199 14:14:12 -- pm/common@17 -- $ local monitor
00:03:49.199 14:14:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:49.199 14:14:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:49.199 14:14:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:49.199 14:14:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:49.199 14:14:12 -- pm/common@25 -- $ sleep 1
00:03:49.199 14:14:12 -- pm/common@21 -- $ date +%s
00:03:49.199 14:14:12 -- pm/common@21 -- $ date +%s
00:03:49.199 14:14:12 -- pm/common@21 -- $ date +%s
00:03:49.199 14:14:12 -- pm/common@21 -- $ date +%s
00:03:49.199 14:14:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728303252
00:03:49.199 14:14:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728303252
00:03:49.199 14:14:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728303252
00:03:49.199 14:14:12 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728303252
00:03:49.199 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728303252_collect-vmstat.pm.log
00:03:49.199 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728303252_collect-cpu-load.pm.log
00:03:49.199 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728303252_collect-cpu-temp.pm.log
00:03:49.199 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728303252_collect-bmc-pm.bmc.pm.log
00:03:50.142 14:14:13 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:03:50.142 14:14:13 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:50.142 14:14:13 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:50.142 14:14:13 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:50.142 14:14:13 -- spdk/autobuild.sh@16 -- $ date -u
00:03:50.142 Mon Oct 7 12:14:13 PM UTC 2024
00:03:50.142 14:14:13 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:50.142 v25.01-pre-35-g3950cd1bb
00:03:50.142 14:14:13 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:03:50.142 14:14:13 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:03:50.142 14:14:13 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:03:50.142 14:14:13 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:03:50.142 14:14:13 -- common/autotest_common.sh@10 -- $ set +x
00:03:50.142 ************************************
00:03:50.142 START TEST asan
00:03:50.142 ************************************
00:03:50.142 14:14:13 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:03:50.142 using asan
00:03:50.142
00:03:50.142 real	0m0.001s
00:03:50.142 user	0m0.001s
00:03:50.142 sys	0m0.000s
00:03:50.142 14:14:13 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:03:50.142 14:14:13 asan -- common/autotest_common.sh@10 -- $ set +x
00:03:50.142 ************************************
00:03:50.142 END TEST asan
00:03:50.142 ************************************
00:03:50.142 14:14:13 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:50.142 14:14:13 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:50.142 14:14:13 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:03:50.142 14:14:13 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:03:50.142 14:14:13 -- common/autotest_common.sh@10 -- $ set +x
00:03:50.142 ************************************
00:03:50.142 START TEST ubsan
00:03:50.142 ************************************
00:03:50.142 14:14:13 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:03:50.142 using ubsan
00:03:50.142
00:03:50.142 real	0m0.000s
00:03:50.142 user	0m0.000s
00:03:50.142 sys	0m0.000s
00:03:50.142 14:14:13 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:03:50.142 14:14:13 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:50.142 ************************************
00:03:50.142 END TEST ubsan
00:03:50.142 ************************************
00:03:50.403 14:14:13 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:50.403 14:14:13 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:50.403 14:14:13 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:50.403 14:14:13 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:50.403 14:14:13 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:50.403 14:14:13 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:50.403 14:14:13 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:50.403 14:14:13 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:50.403 14:14:13 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:03:50.403 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:03:50.403 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:03:50.664 Using 'verbs' RDMA provider
00:04:06.513 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:04:18.754 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:04:18.754 Creating mk/config.mk...done.
00:04:18.754 Creating mk/cc.flags.mk...done.
00:04:18.754 Type 'make' to build.
00:04:18.754 14:14:42 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:04:18.754 14:14:42 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:04:18.754 14:14:42 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:04:18.754 14:14:42 -- common/autotest_common.sh@10 -- $ set +x
00:04:18.754 ************************************
00:04:18.754 START TEST make
00:04:18.754 ************************************
00:04:18.754 14:14:42 make -- common/autotest_common.sh@1125 -- $ make -j144
00:04:19.016 make[1]: Nothing to be done for 'all'.
00:04:29.019 The Meson build system
00:04:29.019 Version: 1.5.0
00:04:29.019 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:04:29.019 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:04:29.019 Build type: native build
00:04:29.019 Program cat found: YES (/usr/bin/cat)
00:04:29.019 Project name: DPDK
00:04:29.019 Project version: 24.03.0
00:04:29.019 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:29.019 C linker for the host machine: cc ld.bfd 2.40-14
00:04:29.019 Host machine cpu family: x86_64
00:04:29.019 Host machine cpu: x86_64
00:04:29.019 Message: ## Building in Developer Mode ##
00:04:29.019 Program pkg-config found: YES (/usr/bin/pkg-config)
00:04:29.019 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:04:29.019 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:04:29.019
Program python3 found: YES (/usr/bin/python3)
00:04:29.019 Program cat found: YES (/usr/bin/cat)
00:04:29.019 Compiler for C supports arguments -march=native: YES
00:04:29.019 Checking for size of "void *" : 8
00:04:29.019 Checking for size of "void *" : 8 (cached)
00:04:29.019 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:04:29.019 Library m found: YES
00:04:29.019 Library numa found: YES
00:04:29.019 Has header "numaif.h" : YES
00:04:29.019 Library fdt found: NO
00:04:29.019 Library execinfo found: NO
00:04:29.019 Has header "execinfo.h" : YES
00:04:29.019 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:29.019 Run-time dependency libarchive found: NO (tried pkgconfig)
00:04:29.019 Run-time dependency libbsd found: NO (tried pkgconfig)
00:04:29.019 Run-time dependency jansson found: NO (tried pkgconfig)
00:04:29.019 Run-time dependency openssl found: YES 3.1.1
00:04:29.019 Run-time dependency libpcap found: YES 1.10.4
00:04:29.019 Has header "pcap.h" with dependency libpcap: YES
00:04:29.019 Compiler for C supports arguments -Wcast-qual: YES
00:04:29.019 Compiler for C supports arguments -Wdeprecated: YES
00:04:29.019 Compiler for C supports arguments -Wformat: YES
00:04:29.019 Compiler for C supports arguments -Wformat-nonliteral: NO
00:04:29.019 Compiler for C supports arguments -Wformat-security: NO
00:04:29.019 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:29.019 Compiler for C supports arguments -Wmissing-prototypes: YES
00:04:29.019 Compiler for C supports arguments -Wnested-externs: YES
00:04:29.019 Compiler for C supports arguments -Wold-style-definition: YES
00:04:29.019 Compiler for C supports arguments -Wpointer-arith: YES
00:04:29.019 Compiler for C supports arguments -Wsign-compare: YES
00:04:29.019 Compiler for C supports arguments -Wstrict-prototypes: YES
00:04:29.019 Compiler for C supports arguments -Wundef: YES
00:04:29.019 Compiler for C supports arguments -Wwrite-strings: YES
00:04:29.019 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:04:29.019 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:04:29.019 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:29.019 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:04:29.019 Program objdump found: YES (/usr/bin/objdump)
00:04:29.019 Compiler for C supports arguments -mavx512f: YES
00:04:29.019 Checking if "AVX512 checking" compiles: YES
00:04:29.019 Fetching value of define "__SSE4_2__" : 1
00:04:29.019 Fetching value of define "__AES__" : 1
00:04:29.019 Fetching value of define "__AVX__" : 1
00:04:29.019 Fetching value of define "__AVX2__" : 1
00:04:29.019 Fetching value of define "__AVX512BW__" : 1
00:04:29.019 Fetching value of define "__AVX512CD__" : 1
00:04:29.019 Fetching value of define "__AVX512DQ__" : 1
00:04:29.019 Fetching value of define "__AVX512F__" : 1
00:04:29.019 Fetching value of define "__AVX512VL__" : 1
00:04:29.019 Fetching value of define "__PCLMUL__" : 1
00:04:29.019 Fetching value of define "__RDRND__" : 1
00:04:29.019 Fetching value of define "__RDSEED__" : 1
00:04:29.019 Fetching value of define "__VPCLMULQDQ__" : 1
00:04:29.019 Fetching value of define "__znver1__" : (undefined)
00:04:29.019 Fetching value of define "__znver2__" : (undefined)
00:04:29.019 Fetching value of define "__znver3__" : (undefined)
00:04:29.019 Fetching value of define "__znver4__" : (undefined)
00:04:29.019 Library asan found: YES
00:04:29.019 Compiler for C supports arguments -Wno-format-truncation: YES
00:04:29.019 Message: lib/log: Defining dependency "log"
00:04:29.019 Message: lib/kvargs: Defining dependency "kvargs"
00:04:29.019 Message: lib/telemetry: Defining dependency "telemetry"
00:04:29.019 Library rt found: YES
00:04:29.019 Checking for function "getentropy" : NO
00:04:29.019 Message: lib/eal: Defining dependency "eal"
00:04:29.019 Message: lib/ring: Defining dependency "ring"
00:04:29.019 Message: lib/rcu: Defining dependency "rcu"
00:04:29.019 Message: lib/mempool: Defining dependency "mempool"
00:04:29.019 Message: lib/mbuf: Defining dependency "mbuf"
00:04:29.019 Fetching value of define "__PCLMUL__" : 1 (cached)
00:04:29.019 Fetching value of define "__AVX512F__" : 1 (cached)
00:04:29.019 Fetching value of define "__AVX512BW__" : 1 (cached)
00:04:29.019 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:04:29.019 Fetching value of define "__AVX512VL__" : 1 (cached)
00:04:29.019 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:04:29.019 Compiler for C supports arguments -mpclmul: YES
00:04:29.019 Compiler for C supports arguments -maes: YES
00:04:29.019 Compiler for C supports arguments -mavx512f: YES (cached)
00:04:29.019 Compiler for C supports arguments -mavx512bw: YES
00:04:29.019 Compiler for C supports arguments -mavx512dq: YES
00:04:29.019 Compiler for C supports arguments -mavx512vl: YES
00:04:29.019 Compiler for C supports arguments -mvpclmulqdq: YES
00:04:29.020 Compiler for C supports arguments -mavx2: YES
00:04:29.020 Compiler for C supports arguments -mavx: YES
00:04:29.020 Message: lib/net: Defining dependency "net"
00:04:29.020 Message: lib/meter: Defining dependency "meter"
00:04:29.020 Message: lib/ethdev: Defining dependency "ethdev"
00:04:29.020 Message: lib/pci: Defining dependency "pci"
00:04:29.020 Message: lib/cmdline: Defining dependency "cmdline"
00:04:29.020 Message: lib/hash: Defining dependency "hash"
00:04:29.020 Message: lib/timer: Defining dependency "timer"
00:04:29.020 Message: lib/compressdev: Defining dependency "compressdev"
00:04:29.020 Message: lib/cryptodev: Defining dependency "cryptodev"
00:04:29.020 Message: lib/dmadev: Defining dependency "dmadev"
00:04:29.020 Compiler for C supports arguments -Wno-cast-qual: YES
00:04:29.020 Message: lib/power: Defining dependency "power"
00:04:29.020 Message: lib/reorder: Defining dependency "reorder"
00:04:29.020 Message: lib/security: Defining dependency "security"
00:04:29.020 Has header "linux/userfaultfd.h" : YES
00:04:29.020 Has header "linux/vduse.h" : YES
00:04:29.020 Message: lib/vhost: Defining dependency "vhost"
00:04:29.020 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:04:29.020 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:04:29.020 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:04:29.020 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:04:29.020 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:04:29.020 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:04:29.020 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:04:29.020 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:04:29.020 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:04:29.020 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:04:29.020 Program doxygen found: YES (/usr/local/bin/doxygen)
00:04:29.020 Configuring doxy-api-html.conf using configuration
00:04:29.020 Configuring doxy-api-man.conf using configuration
00:04:29.020 Program mandb found: YES (/usr/bin/mandb)
00:04:29.020 Program sphinx-build found: NO
00:04:29.020 Configuring rte_build_config.h using configuration
00:04:29.020 Message:
00:04:29.020 =================
00:04:29.020 Applications Enabled
00:04:29.020 =================
00:04:29.020
00:04:29.020 apps:
00:04:29.020
00:04:29.020
00:04:29.020 Message:
00:04:29.020 =================
00:04:29.020 Libraries Enabled
00:04:29.020 =================
00:04:29.020
00:04:29.020 libs:
00:04:29.020 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:04:29.020 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:04:29.020 cryptodev, dmadev, power, reorder, security, vhost,
00:04:29.020
00:04:29.020 Message:
00:04:29.020 ===============
00:04:29.020 Drivers Enabled
00:04:29.020 ===============
00:04:29.020
00:04:29.020 common:
00:04:29.020
00:04:29.020 bus:
00:04:29.020 pci, vdev,
00:04:29.020 mempool:
00:04:29.020 ring,
00:04:29.020 dma:
00:04:29.020
00:04:29.020 net:
00:04:29.020
00:04:29.020 crypto:
00:04:29.020
00:04:29.020 compress:
00:04:29.020
00:04:29.020 vdpa:
00:04:29.020
00:04:29.020
00:04:29.020 Message:
00:04:29.020 =================
00:04:29.020 Content Skipped
00:04:29.020 =================
00:04:29.020
00:04:29.020 apps:
00:04:29.020 dumpcap: explicitly disabled via build config
00:04:29.020 graph: explicitly disabled via build config
00:04:29.020 pdump: explicitly disabled via build config
00:04:29.020 proc-info: explicitly disabled via build config
00:04:29.020 test-acl: explicitly disabled via build config
00:04:29.020 test-bbdev: explicitly disabled via build config
00:04:29.020 test-cmdline: explicitly disabled via build config
00:04:29.020 test-compress-perf: explicitly disabled via build config
00:04:29.020 test-crypto-perf: explicitly disabled via build config
00:04:29.020 test-dma-perf: explicitly disabled via build config
00:04:29.020 test-eventdev: explicitly disabled via build config
00:04:29.020 test-fib: explicitly disabled via build config
00:04:29.020 test-flow-perf: explicitly disabled via build config
00:04:29.020 test-gpudev: explicitly disabled via build config
00:04:29.020 test-mldev: explicitly disabled via build config
00:04:29.020 test-pipeline: explicitly disabled via build config
00:04:29.020 test-pmd: explicitly disabled via build config
00:04:29.020 test-regex: explicitly disabled via build config
00:04:29.020 test-sad: explicitly disabled via build config
00:04:29.020 test-security-perf: explicitly disabled via build config
00:04:29.020
00:04:29.020 libs:
00:04:29.020 argparse: explicitly disabled via build config
00:04:29.020 metrics: explicitly disabled via build config
00:04:29.020 acl: explicitly disabled via build config
00:04:29.020 bbdev: explicitly disabled via build config
00:04:29.020 bitratestats: explicitly
disabled via build config 00:04:29.020 bpf: explicitly disabled via build config 00:04:29.020 cfgfile: explicitly disabled via build config 00:04:29.020 distributor: explicitly disabled via build config 00:04:29.020 efd: explicitly disabled via build config 00:04:29.020 eventdev: explicitly disabled via build config 00:04:29.020 dispatcher: explicitly disabled via build config 00:04:29.020 gpudev: explicitly disabled via build config 00:04:29.020 gro: explicitly disabled via build config 00:04:29.020 gso: explicitly disabled via build config 00:04:29.020 ip_frag: explicitly disabled via build config 00:04:29.020 jobstats: explicitly disabled via build config 00:04:29.020 latencystats: explicitly disabled via build config 00:04:29.020 lpm: explicitly disabled via build config 00:04:29.020 member: explicitly disabled via build config 00:04:29.020 pcapng: explicitly disabled via build config 00:04:29.020 rawdev: explicitly disabled via build config 00:04:29.020 regexdev: explicitly disabled via build config 00:04:29.020 mldev: explicitly disabled via build config 00:04:29.020 rib: explicitly disabled via build config 00:04:29.020 sched: explicitly disabled via build config 00:04:29.020 stack: explicitly disabled via build config 00:04:29.020 ipsec: explicitly disabled via build config 00:04:29.020 pdcp: explicitly disabled via build config 00:04:29.020 fib: explicitly disabled via build config 00:04:29.020 port: explicitly disabled via build config 00:04:29.020 pdump: explicitly disabled via build config 00:04:29.020 table: explicitly disabled via build config 00:04:29.020 pipeline: explicitly disabled via build config 00:04:29.020 graph: explicitly disabled via build config 00:04:29.020 node: explicitly disabled via build config 00:04:29.020 00:04:29.020 drivers: 00:04:29.020 common/cpt: not in enabled drivers build config 00:04:29.020 common/dpaax: not in enabled drivers build config 00:04:29.020 common/iavf: not in enabled drivers build config 00:04:29.020 
common/idpf: not in enabled drivers build config 00:04:29.020 common/ionic: not in enabled drivers build config 00:04:29.020 common/mvep: not in enabled drivers build config 00:04:29.020 common/octeontx: not in enabled drivers build config 00:04:29.020 bus/auxiliary: not in enabled drivers build config 00:04:29.020 bus/cdx: not in enabled drivers build config 00:04:29.020 bus/dpaa: not in enabled drivers build config 00:04:29.020 bus/fslmc: not in enabled drivers build config 00:04:29.020 bus/ifpga: not in enabled drivers build config 00:04:29.020 bus/platform: not in enabled drivers build config 00:04:29.020 bus/uacce: not in enabled drivers build config 00:04:29.020 bus/vmbus: not in enabled drivers build config 00:04:29.020 common/cnxk: not in enabled drivers build config 00:04:29.020 common/mlx5: not in enabled drivers build config 00:04:29.020 common/nfp: not in enabled drivers build config 00:04:29.020 common/nitrox: not in enabled drivers build config 00:04:29.020 common/qat: not in enabled drivers build config 00:04:29.020 common/sfc_efx: not in enabled drivers build config 00:04:29.020 mempool/bucket: not in enabled drivers build config 00:04:29.020 mempool/cnxk: not in enabled drivers build config 00:04:29.020 mempool/dpaa: not in enabled drivers build config 00:04:29.020 mempool/dpaa2: not in enabled drivers build config 00:04:29.020 mempool/octeontx: not in enabled drivers build config 00:04:29.020 mempool/stack: not in enabled drivers build config 00:04:29.020 dma/cnxk: not in enabled drivers build config 00:04:29.020 dma/dpaa: not in enabled drivers build config 00:04:29.020 dma/dpaa2: not in enabled drivers build config 00:04:29.020 dma/hisilicon: not in enabled drivers build config 00:04:29.020 dma/idxd: not in enabled drivers build config 00:04:29.020 dma/ioat: not in enabled drivers build config 00:04:29.020 dma/skeleton: not in enabled drivers build config 00:04:29.020 net/af_packet: not in enabled drivers build config 00:04:29.020 net/af_xdp: 
not in enabled drivers build config 00:04:29.020 net/ark: not in enabled drivers build config 00:04:29.020 net/atlantic: not in enabled drivers build config 00:04:29.020 net/avp: not in enabled drivers build config 00:04:29.020 net/axgbe: not in enabled drivers build config 00:04:29.020 net/bnx2x: not in enabled drivers build config 00:04:29.020 net/bnxt: not in enabled drivers build config 00:04:29.020 net/bonding: not in enabled drivers build config 00:04:29.020 net/cnxk: not in enabled drivers build config 00:04:29.021 net/cpfl: not in enabled drivers build config 00:04:29.021 net/cxgbe: not in enabled drivers build config 00:04:29.021 net/dpaa: not in enabled drivers build config 00:04:29.021 net/dpaa2: not in enabled drivers build config 00:04:29.021 net/e1000: not in enabled drivers build config 00:04:29.021 net/ena: not in enabled drivers build config 00:04:29.021 net/enetc: not in enabled drivers build config 00:04:29.021 net/enetfec: not in enabled drivers build config 00:04:29.021 net/enic: not in enabled drivers build config 00:04:29.021 net/failsafe: not in enabled drivers build config 00:04:29.021 net/fm10k: not in enabled drivers build config 00:04:29.021 net/gve: not in enabled drivers build config 00:04:29.021 net/hinic: not in enabled drivers build config 00:04:29.021 net/hns3: not in enabled drivers build config 00:04:29.021 net/i40e: not in enabled drivers build config 00:04:29.021 net/iavf: not in enabled drivers build config 00:04:29.021 net/ice: not in enabled drivers build config 00:04:29.021 net/idpf: not in enabled drivers build config 00:04:29.021 net/igc: not in enabled drivers build config 00:04:29.021 net/ionic: not in enabled drivers build config 00:04:29.021 net/ipn3ke: not in enabled drivers build config 00:04:29.021 net/ixgbe: not in enabled drivers build config 00:04:29.021 net/mana: not in enabled drivers build config 00:04:29.021 net/memif: not in enabled drivers build config 00:04:29.021 net/mlx4: not in enabled drivers build 
config 00:04:29.021 net/mlx5: not in enabled drivers build config 00:04:29.021 net/mvneta: not in enabled drivers build config 00:04:29.021 net/mvpp2: not in enabled drivers build config 00:04:29.021 net/netvsc: not in enabled drivers build config 00:04:29.021 net/nfb: not in enabled drivers build config 00:04:29.021 net/nfp: not in enabled drivers build config 00:04:29.021 net/ngbe: not in enabled drivers build config 00:04:29.021 net/null: not in enabled drivers build config 00:04:29.021 net/octeontx: not in enabled drivers build config 00:04:29.021 net/octeon_ep: not in enabled drivers build config 00:04:29.021 net/pcap: not in enabled drivers build config 00:04:29.021 net/pfe: not in enabled drivers build config 00:04:29.021 net/qede: not in enabled drivers build config 00:04:29.021 net/ring: not in enabled drivers build config 00:04:29.021 net/sfc: not in enabled drivers build config 00:04:29.021 net/softnic: not in enabled drivers build config 00:04:29.021 net/tap: not in enabled drivers build config 00:04:29.021 net/thunderx: not in enabled drivers build config 00:04:29.021 net/txgbe: not in enabled drivers build config 00:04:29.021 net/vdev_netvsc: not in enabled drivers build config 00:04:29.021 net/vhost: not in enabled drivers build config 00:04:29.021 net/virtio: not in enabled drivers build config 00:04:29.021 net/vmxnet3: not in enabled drivers build config 00:04:29.021 raw/*: missing internal dependency, "rawdev" 00:04:29.021 crypto/armv8: not in enabled drivers build config 00:04:29.021 crypto/bcmfs: not in enabled drivers build config 00:04:29.021 crypto/caam_jr: not in enabled drivers build config 00:04:29.021 crypto/ccp: not in enabled drivers build config 00:04:29.021 crypto/cnxk: not in enabled drivers build config 00:04:29.021 crypto/dpaa_sec: not in enabled drivers build config 00:04:29.021 crypto/dpaa2_sec: not in enabled drivers build config 00:04:29.021 crypto/ipsec_mb: not in enabled drivers build config 00:04:29.021 crypto/mlx5: not in 
enabled drivers build config 00:04:29.021 crypto/mvsam: not in enabled drivers build config 00:04:29.021 crypto/nitrox: not in enabled drivers build config 00:04:29.021 crypto/null: not in enabled drivers build config 00:04:29.021 crypto/octeontx: not in enabled drivers build config 00:04:29.021 crypto/openssl: not in enabled drivers build config 00:04:29.021 crypto/scheduler: not in enabled drivers build config 00:04:29.021 crypto/uadk: not in enabled drivers build config 00:04:29.021 crypto/virtio: not in enabled drivers build config 00:04:29.021 compress/isal: not in enabled drivers build config 00:04:29.021 compress/mlx5: not in enabled drivers build config 00:04:29.021 compress/nitrox: not in enabled drivers build config 00:04:29.021 compress/octeontx: not in enabled drivers build config 00:04:29.021 compress/zlib: not in enabled drivers build config 00:04:29.021 regex/*: missing internal dependency, "regexdev" 00:04:29.021 ml/*: missing internal dependency, "mldev" 00:04:29.021 vdpa/ifc: not in enabled drivers build config 00:04:29.021 vdpa/mlx5: not in enabled drivers build config 00:04:29.021 vdpa/nfp: not in enabled drivers build config 00:04:29.021 vdpa/sfc: not in enabled drivers build config 00:04:29.021 event/*: missing internal dependency, "eventdev" 00:04:29.021 baseband/*: missing internal dependency, "bbdev" 00:04:29.021 gpu/*: missing internal dependency, "gpudev" 00:04:29.021 00:04:29.021 00:04:29.021 Build targets in project: 84 00:04:29.021 00:04:29.021 DPDK 24.03.0 00:04:29.021 00:04:29.021 User defined options 00:04:29.021 buildtype : debug 00:04:29.021 default_library : shared 00:04:29.021 libdir : lib 00:04:29.021 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:04:29.021 b_sanitize : address 00:04:29.021 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:29.021 c_link_args : 00:04:29.021 cpu_instruction_set: native 00:04:29.021 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:04:29.021 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:04:29.021 enable_docs : false 00:04:29.021 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:04:29.021 enable_kmods : false 00:04:29.021 max_lcores : 128 00:04:29.021 tests : false 00:04:29.021 00:04:29.021 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:29.021 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:04:29.021 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:29.021 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:29.021 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:29.021 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:29.021 [5/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:29.021 [6/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:29.021 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:29.021 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:29.021 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:29.021 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:29.021 [11/267] Linking static target lib/librte_kvargs.a 00:04:29.021 [12/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:29.021 [13/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:29.021 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:29.021 [15/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:29.021 [16/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:29.021 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:29.021 [18/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:29.021 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:29.021 [20/267] Linking static target lib/librte_log.a 00:04:29.021 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:29.021 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:29.021 [23/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:29.021 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:29.021 [25/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:29.021 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:29.021 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:29.021 [28/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:29.021 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:29.021 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:29.021 [31/267] Linking static target lib/librte_pci.a 00:04:29.021 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:29.021 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:29.021 [34/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:29.021 [35/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:29.021 [36/267] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:29.021 [37/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:29.021 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:29.021 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:29.021 [40/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.021 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:29.021 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:29.021 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:29.021 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:29.021 [45/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:29.021 [46/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.021 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:29.021 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:29.021 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:29.021 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:29.021 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:29.021 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:29.021 [53/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:29.021 [54/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:29.021 [55/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:29.021 [56/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:29.021 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:29.021 
[58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:29.022 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:29.022 [60/267] Linking static target lib/librte_meter.a 00:04:29.022 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:29.022 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:29.022 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:29.022 [64/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:29.022 [65/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:29.022 [66/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:29.022 [67/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:29.022 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:29.022 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:29.022 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:29.022 [71/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:29.022 [72/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:29.022 [73/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:29.022 [74/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:29.022 [75/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:04:29.022 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:29.022 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:29.022 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:29.022 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:29.022 [80/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:29.022 
[81/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:29.022 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:29.022 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:29.022 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:29.022 [85/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:29.022 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:29.022 [87/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:29.022 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:29.022 [89/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:29.022 [90/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:29.022 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:29.022 [92/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:29.022 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:29.022 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:29.022 [95/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:29.022 [96/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:29.022 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:29.022 [98/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:29.022 [99/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:29.022 [100/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:29.022 [101/267] Linking static target lib/librte_ring.a 00:04:29.022 [102/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:29.022 [103/267] 
Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:29.022 [104/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:29.022 [105/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:29.022 [106/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:29.022 [107/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:29.022 [108/267] Linking static target lib/librte_cmdline.a 00:04:29.022 [109/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:29.022 [110/267] Linking static target lib/librte_telemetry.a 00:04:29.022 [111/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:29.022 [112/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:29.022 [113/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:29.022 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:29.022 [115/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:29.022 [116/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:29.022 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:29.022 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:29.022 [119/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.022 [120/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:29.022 [121/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:29.022 [122/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:29.022 [123/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:29.022 [124/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:29.022 [125/267] Compiling C object 
lib/librte_net.a.p/net_rte_net.c.o 00:04:29.022 [126/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:29.022 [127/267] Linking target lib/librte_log.so.24.1 00:04:29.022 [128/267] Linking static target lib/librte_timer.a 00:04:29.022 [129/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:29.022 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:29.022 [131/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:29.022 [132/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:29.022 [133/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:29.022 [134/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:29.022 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:29.022 [136/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:29.022 [137/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:29.022 [138/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:29.022 [139/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:29.022 [140/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:29.022 [141/267] Linking static target lib/librte_dmadev.a 00:04:29.022 [142/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:29.022 [143/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:29.022 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:29.022 [145/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:29.022 [146/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:29.022 [147/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:29.022 [148/267] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:29.022 [149/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:29.022 [150/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:29.022 [151/267] Linking static target lib/librte_reorder.a 00:04:29.022 [152/267] Linking static target lib/librte_net.a 00:04:29.022 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:29.022 [154/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.022 [155/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:29.022 [156/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:29.022 [157/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:29.022 [158/267] Linking static target lib/librte_compressdev.a 00:04:29.022 [159/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:29.022 [160/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:29.022 [161/267] Linking static target lib/librte_rcu.a 00:04:29.022 [162/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:29.022 [163/267] Linking static target lib/librte_mempool.a 00:04:29.022 [164/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:29.022 [165/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:29.022 [166/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:29.284 [167/267] Linking static target lib/librte_power.a 00:04:29.284 [168/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:29.284 [169/267] Linking target lib/librte_kvargs.so.24.1 00:04:29.284 [170/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:29.284 [171/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:29.284 [172/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 
00:04:29.284 [173/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:29.284 [174/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:29.284 [175/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:29.284 [176/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:29.284 [177/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:29.284 [178/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:29.284 [179/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.284 [180/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:29.284 [181/267] Linking static target lib/librte_eal.a 00:04:29.284 [182/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:29.284 [183/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:29.284 [184/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:29.284 [185/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:29.284 [186/267] Linking static target drivers/librte_bus_vdev.a 00:04:29.284 [187/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:29.284 [188/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:29.284 [189/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:29.284 [190/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:29.284 [191/267] Linking static target drivers/librte_bus_pci.a 00:04:29.284 [192/267] Linking static target lib/librte_security.a 00:04:29.284 [193/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:29.546 [194/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 
00:04:29.546 [195/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.546 [196/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:29.546 [197/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:29.546 [198/267] Linking static target drivers/librte_mempool_ring.a 00:04:29.546 [199/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.546 [200/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.546 [201/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:29.546 [202/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.546 [203/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.546 [204/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:29.546 [205/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:29.546 [206/267] Linking target lib/librte_telemetry.so.24.1 00:04:29.546 [207/267] Linking static target lib/librte_mbuf.a 00:04:29.807 [208/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.807 [209/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:29.807 [210/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:29.807 [211/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:29.807 [212/267] Linking static target lib/librte_cryptodev.a 00:04:29.807 [213/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.807 [214/267] Linking static target lib/librte_hash.a 00:04:29.807 [215/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by 
meson to capture output) 00:04:30.068 [216/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:30.068 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.068 [218/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.068 [219/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.068 [220/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.329 [221/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.589 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.589 [223/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:30.589 [224/267] Linking static target lib/librte_ethdev.a 00:04:30.850 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.110 [226/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:32.051 [227/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:33.965 [228/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:33.965 [229/267] Linking static target lib/librte_vhost.a 00:04:35.887 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:40.100 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:40.672 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:40.672 [233/267] Linking target lib/librte_eal.so.24.1 00:04:40.933 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:40.933 [235/267] Linking target lib/librte_ring.so.24.1 00:04:40.933 [236/267] Linking target 
lib/librte_meter.so.24.1 00:04:40.933 [237/267] Linking target lib/librte_pci.so.24.1 00:04:40.933 [238/267] Linking target lib/librte_timer.so.24.1 00:04:40.933 [239/267] Linking target lib/librte_dmadev.so.24.1 00:04:40.933 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:04:40.933 [241/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:40.933 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:41.195 [243/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:41.195 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:41.195 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:41.195 [246/267] Linking target lib/librte_mempool.so.24.1 00:04:41.195 [247/267] Linking target drivers/librte_bus_pci.so.24.1 00:04:41.195 [248/267] Linking target lib/librte_rcu.so.24.1 00:04:41.195 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:41.195 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:41.195 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:04:41.195 [252/267] Linking target lib/librte_mbuf.so.24.1 00:04:41.456 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:41.456 [254/267] Linking target lib/librte_reorder.so.24.1 00:04:41.456 [255/267] Linking target lib/librte_compressdev.so.24.1 00:04:41.456 [256/267] Linking target lib/librte_net.so.24.1 00:04:41.456 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:04:41.716 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:41.716 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:41.716 [260/267] Linking target lib/librte_cmdline.so.24.1 00:04:41.716 [261/267] Linking target 
lib/librte_hash.so.24.1 00:04:41.716 [262/267] Linking target lib/librte_security.so.24.1 00:04:41.716 [263/267] Linking target lib/librte_ethdev.so.24.1 00:04:41.716 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:41.716 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:41.977 [266/267] Linking target lib/librte_power.so.24.1 00:04:41.977 [267/267] Linking target lib/librte_vhost.so.24.1 00:04:41.977 INFO: autodetecting backend as ninja 00:04:41.977 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:04:44.524 CC lib/log/log.o 00:04:44.524 CC lib/log/log_flags.o 00:04:44.524 CC lib/log/log_deprecated.o 00:04:44.524 CC lib/ut_mock/mock.o 00:04:44.524 CC lib/ut/ut.o 00:04:44.785 LIB libspdk_ut.a 00:04:44.785 LIB libspdk_log.a 00:04:44.785 SO libspdk_ut.so.2.0 00:04:44.785 LIB libspdk_ut_mock.a 00:04:44.785 SO libspdk_ut_mock.so.6.0 00:04:44.785 SO libspdk_log.so.7.0 00:04:44.785 SYMLINK libspdk_ut.so 00:04:44.785 SYMLINK libspdk_ut_mock.so 00:04:44.785 SYMLINK libspdk_log.so 00:04:45.357 CC lib/util/base64.o 00:04:45.357 CC lib/util/bit_array.o 00:04:45.357 CC lib/util/cpuset.o 00:04:45.357 CC lib/util/crc16.o 00:04:45.357 CC lib/util/crc32c.o 00:04:45.357 CC lib/util/crc32.o 00:04:45.358 CC lib/util/crc32_ieee.o 00:04:45.358 CC lib/util/fd.o 00:04:45.358 CC lib/util/crc64.o 00:04:45.358 CC lib/util/dif.o 00:04:45.358 CC lib/util/fd_group.o 00:04:45.358 CC lib/util/file.o 00:04:45.358 CC lib/util/hexlify.o 00:04:45.358 CC lib/util/iov.o 00:04:45.358 CC lib/util/math.o 00:04:45.358 CC lib/util/net.o 00:04:45.358 CC lib/util/pipe.o 00:04:45.358 CC lib/dma/dma.o 00:04:45.358 CC lib/util/strerror_tls.o 00:04:45.358 CC lib/util/string.o 00:04:45.358 CC lib/util/uuid.o 00:04:45.358 CC lib/ioat/ioat.o 00:04:45.358 CC lib/util/xor.o 00:04:45.358 CC lib/util/zipf.o 00:04:45.358 CC lib/util/md5.o 
00:04:45.358 CXX lib/trace_parser/trace.o 00:04:45.358 CC lib/vfio_user/host/vfio_user_pci.o 00:04:45.358 CC lib/vfio_user/host/vfio_user.o 00:04:45.358 LIB libspdk_dma.a 00:04:45.358 SO libspdk_dma.so.5.0 00:04:45.618 LIB libspdk_ioat.a 00:04:45.618 SO libspdk_ioat.so.7.0 00:04:45.618 SYMLINK libspdk_dma.so 00:04:45.618 SYMLINK libspdk_ioat.so 00:04:45.618 LIB libspdk_vfio_user.a 00:04:45.618 SO libspdk_vfio_user.so.5.0 00:04:45.879 SYMLINK libspdk_vfio_user.so 00:04:45.879 LIB libspdk_util.a 00:04:45.879 SO libspdk_util.so.10.0 00:04:46.140 SYMLINK libspdk_util.so 00:04:46.140 LIB libspdk_trace_parser.a 00:04:46.140 SO libspdk_trace_parser.so.6.0 00:04:46.400 SYMLINK libspdk_trace_parser.so 00:04:46.401 CC lib/conf/conf.o 00:04:46.401 CC lib/json/json_parse.o 00:04:46.401 CC lib/rdma_utils/rdma_utils.o 00:04:46.401 CC lib/json/json_util.o 00:04:46.401 CC lib/json/json_write.o 00:04:46.401 CC lib/vmd/vmd.o 00:04:46.401 CC lib/rdma_provider/common.o 00:04:46.401 CC lib/vmd/led.o 00:04:46.401 CC lib/env_dpdk/env.o 00:04:46.401 CC lib/env_dpdk/memory.o 00:04:46.401 CC lib/env_dpdk/pci.o 00:04:46.401 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:46.401 CC lib/idxd/idxd.o 00:04:46.401 CC lib/env_dpdk/init.o 00:04:46.401 CC lib/env_dpdk/pci_ioat.o 00:04:46.401 CC lib/env_dpdk/threads.o 00:04:46.401 CC lib/idxd/idxd_user.o 00:04:46.401 CC lib/idxd/idxd_kernel.o 00:04:46.401 CC lib/env_dpdk/pci_virtio.o 00:04:46.401 CC lib/env_dpdk/pci_vmd.o 00:04:46.401 CC lib/env_dpdk/pci_idxd.o 00:04:46.401 CC lib/env_dpdk/pci_event.o 00:04:46.401 CC lib/env_dpdk/sigbus_handler.o 00:04:46.401 CC lib/env_dpdk/pci_dpdk.o 00:04:46.401 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:46.401 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:46.661 LIB libspdk_rdma_provider.a 00:04:46.661 LIB libspdk_conf.a 00:04:46.661 SO libspdk_rdma_provider.so.6.0 00:04:46.661 LIB libspdk_rdma_utils.a 00:04:46.661 SO libspdk_conf.so.6.0 00:04:46.922 LIB libspdk_json.a 00:04:46.922 SO libspdk_rdma_utils.so.1.0 00:04:46.922 
SYMLINK libspdk_rdma_provider.so 00:04:46.922 SYMLINK libspdk_conf.so 00:04:46.922 SO libspdk_json.so.6.0 00:04:46.922 SYMLINK libspdk_rdma_utils.so 00:04:46.922 SYMLINK libspdk_json.so 00:04:47.184 LIB libspdk_idxd.a 00:04:47.184 LIB libspdk_vmd.a 00:04:47.184 SO libspdk_idxd.so.12.1 00:04:47.184 CC lib/jsonrpc/jsonrpc_server.o 00:04:47.184 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:47.184 CC lib/jsonrpc/jsonrpc_client.o 00:04:47.184 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:47.184 SO libspdk_vmd.so.6.0 00:04:47.446 SYMLINK libspdk_idxd.so 00:04:47.446 SYMLINK libspdk_vmd.so 00:04:47.446 LIB libspdk_jsonrpc.a 00:04:47.707 SO libspdk_jsonrpc.so.6.0 00:04:47.707 SYMLINK libspdk_jsonrpc.so 00:04:47.969 CC lib/rpc/rpc.o 00:04:47.969 LIB libspdk_env_dpdk.a 00:04:48.230 SO libspdk_env_dpdk.so.15.0 00:04:48.230 LIB libspdk_rpc.a 00:04:48.230 SYMLINK libspdk_env_dpdk.so 00:04:48.230 SO libspdk_rpc.so.6.0 00:04:48.491 SYMLINK libspdk_rpc.so 00:04:48.752 CC lib/trace/trace.o 00:04:48.752 CC lib/trace/trace_flags.o 00:04:48.752 CC lib/trace/trace_rpc.o 00:04:48.752 CC lib/keyring/keyring.o 00:04:48.752 CC lib/keyring/keyring_rpc.o 00:04:48.752 CC lib/notify/notify.o 00:04:48.752 CC lib/notify/notify_rpc.o 00:04:49.014 LIB libspdk_notify.a 00:04:49.014 SO libspdk_notify.so.6.0 00:04:49.014 LIB libspdk_keyring.a 00:04:49.014 LIB libspdk_trace.a 00:04:49.014 SYMLINK libspdk_notify.so 00:04:49.014 SO libspdk_keyring.so.2.0 00:04:49.014 SO libspdk_trace.so.11.0 00:04:49.014 SYMLINK libspdk_keyring.so 00:04:49.014 SYMLINK libspdk_trace.so 00:04:49.586 CC lib/thread/thread.o 00:04:49.586 CC lib/thread/iobuf.o 00:04:49.586 CC lib/sock/sock.o 00:04:49.586 CC lib/sock/sock_rpc.o 00:04:49.847 LIB libspdk_sock.a 00:04:50.108 SO libspdk_sock.so.10.0 00:04:50.108 SYMLINK libspdk_sock.so 00:04:50.370 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:50.370 CC lib/nvme/nvme_ns_cmd.o 00:04:50.370 CC lib/nvme/nvme_ctrlr.o 00:04:50.370 CC lib/nvme/nvme_fabric.o 00:04:50.370 CC lib/nvme/nvme_ns.o 00:04:50.370 
CC lib/nvme/nvme_pcie_common.o 00:04:50.370 CC lib/nvme/nvme_pcie.o 00:04:50.370 CC lib/nvme/nvme_qpair.o 00:04:50.370 CC lib/nvme/nvme.o 00:04:50.370 CC lib/nvme/nvme_quirks.o 00:04:50.370 CC lib/nvme/nvme_transport.o 00:04:50.370 CC lib/nvme/nvme_discovery.o 00:04:50.370 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:50.370 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:50.370 CC lib/nvme/nvme_tcp.o 00:04:50.370 CC lib/nvme/nvme_io_msg.o 00:04:50.370 CC lib/nvme/nvme_opal.o 00:04:50.370 CC lib/nvme/nvme_poll_group.o 00:04:50.370 CC lib/nvme/nvme_zns.o 00:04:50.370 CC lib/nvme/nvme_stubs.o 00:04:50.370 CC lib/nvme/nvme_auth.o 00:04:50.370 CC lib/nvme/nvme_cuse.o 00:04:50.370 CC lib/nvme/nvme_rdma.o 00:04:51.312 LIB libspdk_thread.a 00:04:51.312 SO libspdk_thread.so.10.2 00:04:51.312 SYMLINK libspdk_thread.so 00:04:51.574 CC lib/fsdev/fsdev.o 00:04:51.574 CC lib/fsdev/fsdev_io.o 00:04:51.574 CC lib/fsdev/fsdev_rpc.o 00:04:51.574 CC lib/blob/blobstore.o 00:04:51.574 CC lib/accel/accel_rpc.o 00:04:51.574 CC lib/blob/request.o 00:04:51.574 CC lib/accel/accel.o 00:04:51.574 CC lib/blob/zeroes.o 00:04:51.574 CC lib/accel/accel_sw.o 00:04:51.574 CC lib/blob/blob_bs_dev.o 00:04:51.574 CC lib/init/json_config.o 00:04:51.574 CC lib/virtio/virtio.o 00:04:51.574 CC lib/init/subsystem.o 00:04:51.574 CC lib/virtio/virtio_vhost_user.o 00:04:51.574 CC lib/init/subsystem_rpc.o 00:04:51.574 CC lib/virtio/virtio_vfio_user.o 00:04:51.574 CC lib/init/rpc.o 00:04:51.574 CC lib/virtio/virtio_pci.o 00:04:51.835 LIB libspdk_init.a 00:04:52.096 SO libspdk_init.so.6.0 00:04:52.096 LIB libspdk_virtio.a 00:04:52.096 SYMLINK libspdk_init.so 00:04:52.096 SO libspdk_virtio.so.7.0 00:04:52.096 SYMLINK libspdk_virtio.so 00:04:52.357 LIB libspdk_fsdev.a 00:04:52.357 SO libspdk_fsdev.so.1.0 00:04:52.357 CC lib/event/app.o 00:04:52.357 CC lib/event/reactor.o 00:04:52.357 CC lib/event/log_rpc.o 00:04:52.357 CC lib/event/app_rpc.o 00:04:52.357 CC lib/event/scheduler_static.o 00:04:52.357 SYMLINK libspdk_fsdev.so 
00:04:52.619 LIB libspdk_nvme.a 00:04:52.880 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:52.880 SO libspdk_nvme.so.14.0 00:04:52.880 LIB libspdk_accel.a 00:04:52.880 SO libspdk_accel.so.16.0 00:04:52.880 LIB libspdk_event.a 00:04:53.142 SO libspdk_event.so.15.0 00:04:53.143 SYMLINK libspdk_accel.so 00:04:53.143 SYMLINK libspdk_event.so 00:04:53.143 SYMLINK libspdk_nvme.so 00:04:53.404 CC lib/bdev/bdev.o 00:04:53.404 CC lib/bdev/bdev_rpc.o 00:04:53.404 CC lib/bdev/bdev_zone.o 00:04:53.404 CC lib/bdev/part.o 00:04:53.404 CC lib/bdev/scsi_nvme.o 00:04:53.665 LIB libspdk_fuse_dispatcher.a 00:04:53.665 SO libspdk_fuse_dispatcher.so.1.0 00:04:53.665 SYMLINK libspdk_fuse_dispatcher.so 00:04:55.580 LIB libspdk_blob.a 00:04:55.580 SO libspdk_blob.so.11.0 00:04:55.580 SYMLINK libspdk_blob.so 00:04:55.841 CC lib/lvol/lvol.o 00:04:55.841 CC lib/blobfs/blobfs.o 00:04:55.841 CC lib/blobfs/tree.o 00:04:56.412 LIB libspdk_bdev.a 00:04:56.412 SO libspdk_bdev.so.17.0 00:04:56.412 SYMLINK libspdk_bdev.so 00:04:56.672 LIB libspdk_blobfs.a 00:04:56.672 SO libspdk_blobfs.so.10.0 00:04:56.673 CC lib/ublk/ublk.o 00:04:56.673 CC lib/ublk/ublk_rpc.o 00:04:56.673 CC lib/nvmf/ctrlr.o 00:04:56.673 CC lib/nvmf/ctrlr_discovery.o 00:04:56.673 CC lib/nvmf/ctrlr_bdev.o 00:04:56.673 CC lib/nvmf/subsystem.o 00:04:56.673 CC lib/nbd/nbd.o 00:04:56.673 CC lib/nvmf/nvmf.o 00:04:56.673 CC lib/nvmf/nvmf_rpc.o 00:04:56.673 CC lib/scsi/dev.o 00:04:56.673 CC lib/nbd/nbd_rpc.o 00:04:56.673 CC lib/nvmf/transport.o 00:04:56.673 CC lib/ftl/ftl_init.o 00:04:56.673 CC lib/scsi/lun.o 00:04:56.673 CC lib/ftl/ftl_core.o 00:04:56.673 CC lib/nvmf/tcp.o 00:04:56.673 CC lib/scsi/port.o 00:04:56.673 CC lib/nvmf/stubs.o 00:04:56.673 CC lib/ftl/ftl_layout.o 00:04:56.673 CC lib/scsi/scsi.o 00:04:56.673 CC lib/nvmf/mdns_server.o 00:04:56.673 CC lib/scsi/scsi_bdev.o 00:04:56.673 CC lib/ftl/ftl_debug.o 00:04:56.673 CC lib/nvmf/rdma.o 00:04:56.673 CC lib/scsi/scsi_pr.o 00:04:56.673 CC lib/nvmf/auth.o 00:04:56.673 CC 
lib/ftl/ftl_io.o 00:04:56.673 CC lib/ftl/ftl_sb.o 00:04:56.673 CC lib/scsi/scsi_rpc.o 00:04:56.673 CC lib/ftl/ftl_l2p.o 00:04:56.673 CC lib/scsi/task.o 00:04:56.673 CC lib/ftl/ftl_l2p_flat.o 00:04:56.673 CC lib/ftl/ftl_nv_cache.o 00:04:56.673 CC lib/ftl/ftl_band.o 00:04:56.673 CC lib/ftl/ftl_band_ops.o 00:04:56.673 CC lib/ftl/ftl_writer.o 00:04:56.673 CC lib/ftl/ftl_rq.o 00:04:56.673 CC lib/ftl/ftl_l2p_cache.o 00:04:56.673 CC lib/ftl/ftl_reloc.o 00:04:56.673 CC lib/ftl/ftl_p2l.o 00:04:56.673 CC lib/ftl/ftl_p2l_log.o 00:04:56.673 CC lib/ftl/mngt/ftl_mngt.o 00:04:56.673 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:56.673 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:56.673 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:56.673 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:56.673 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:56.673 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:56.673 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:56.673 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:56.673 LIB libspdk_lvol.a 00:04:56.673 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:56.673 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:56.673 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:56.932 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:56.932 CC lib/ftl/utils/ftl_conf.o 00:04:56.932 CC lib/ftl/utils/ftl_md.o 00:04:56.932 CC lib/ftl/utils/ftl_bitmap.o 00:04:56.932 CC lib/ftl/utils/ftl_mempool.o 00:04:56.932 CC lib/ftl/utils/ftl_property.o 00:04:56.932 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:56.932 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:56.932 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:56.932 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:56.932 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:56.932 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:56.932 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:56.932 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:56.932 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:56.932 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:56.932 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:56.932 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:56.932 CC 
lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:56.932 CC lib/ftl/base/ftl_base_dev.o 00:04:56.932 CC lib/ftl/base/ftl_base_bdev.o 00:04:56.932 SYMLINK libspdk_blobfs.so 00:04:56.932 CC lib/ftl/ftl_trace.o 00:04:56.932 SO libspdk_lvol.so.10.0 00:04:56.932 SYMLINK libspdk_lvol.so 00:04:57.192 LIB libspdk_nbd.a 00:04:57.453 SO libspdk_nbd.so.7.0 00:04:57.453 SYMLINK libspdk_nbd.so 00:04:57.453 LIB libspdk_scsi.a 00:04:57.453 SO libspdk_scsi.so.9.0 00:04:57.714 LIB libspdk_ublk.a 00:04:57.714 SYMLINK libspdk_scsi.so 00:04:57.714 SO libspdk_ublk.so.3.0 00:04:57.714 SYMLINK libspdk_ublk.so 00:04:57.976 CC lib/vhost/vhost.o 00:04:57.976 CC lib/vhost/vhost_rpc.o 00:04:57.976 CC lib/vhost/vhost_scsi.o 00:04:57.976 CC lib/vhost/vhost_blk.o 00:04:57.976 CC lib/vhost/rte_vhost_user.o 00:04:57.976 CC lib/iscsi/conn.o 00:04:57.976 CC lib/iscsi/init_grp.o 00:04:57.976 CC lib/iscsi/iscsi.o 00:04:57.976 CC lib/iscsi/param.o 00:04:57.976 CC lib/iscsi/portal_grp.o 00:04:57.976 CC lib/iscsi/tgt_node.o 00:04:57.976 CC lib/iscsi/iscsi_subsystem.o 00:04:57.976 CC lib/iscsi/iscsi_rpc.o 00:04:57.976 CC lib/iscsi/task.o 00:04:57.976 LIB libspdk_ftl.a 00:04:58.238 SO libspdk_ftl.so.9.0 00:04:58.500 SYMLINK libspdk_ftl.so 00:04:59.073 LIB libspdk_vhost.a 00:04:59.074 SO libspdk_vhost.so.8.0 00:04:59.334 SYMLINK libspdk_vhost.so 00:04:59.334 LIB libspdk_nvmf.a 00:04:59.334 SO libspdk_nvmf.so.19.0 00:04:59.596 LIB libspdk_iscsi.a 00:04:59.596 SYMLINK libspdk_nvmf.so 00:04:59.596 SO libspdk_iscsi.so.8.0 00:04:59.858 SYMLINK libspdk_iscsi.so 00:05:00.430 CC module/env_dpdk/env_dpdk_rpc.o 00:05:00.430 CC module/accel/iaa/accel_iaa.o 00:05:00.430 CC module/accel/iaa/accel_iaa_rpc.o 00:05:00.430 CC module/accel/ioat/accel_ioat.o 00:05:00.430 CC module/accel/ioat/accel_ioat_rpc.o 00:05:00.430 CC module/accel/error/accel_error_rpc.o 00:05:00.430 CC module/accel/error/accel_error.o 00:05:00.430 CC module/blob/bdev/blob_bdev.o 00:05:00.430 LIB libspdk_env_dpdk_rpc.a 00:05:00.430 CC module/accel/dsa/accel_dsa.o 
00:05:00.430 CC module/accel/dsa/accel_dsa_rpc.o 00:05:00.430 CC module/fsdev/aio/fsdev_aio.o 00:05:00.430 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:00.430 CC module/fsdev/aio/linux_aio_mgr.o 00:05:00.430 CC module/sock/posix/posix.o 00:05:00.430 CC module/keyring/file/keyring_rpc.o 00:05:00.430 CC module/keyring/file/keyring.o 00:05:00.430 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:00.430 CC module/scheduler/gscheduler/gscheduler.o 00:05:00.430 CC module/keyring/linux/keyring.o 00:05:00.430 CC module/keyring/linux/keyring_rpc.o 00:05:00.430 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:00.430 SO libspdk_env_dpdk_rpc.so.6.0 00:05:00.691 SYMLINK libspdk_env_dpdk_rpc.so 00:05:00.691 LIB libspdk_keyring_file.a 00:05:00.691 LIB libspdk_keyring_linux.a 00:05:00.691 LIB libspdk_scheduler_dpdk_governor.a 00:05:00.691 LIB libspdk_scheduler_gscheduler.a 00:05:00.691 LIB libspdk_accel_error.a 00:05:00.691 LIB libspdk_accel_ioat.a 00:05:00.691 SO libspdk_keyring_file.so.2.0 00:05:00.691 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:00.691 SO libspdk_keyring_linux.so.1.0 00:05:00.691 SO libspdk_scheduler_gscheduler.so.4.0 00:05:00.691 LIB libspdk_accel_iaa.a 00:05:00.691 SO libspdk_accel_error.so.2.0 00:05:00.691 SO libspdk_accel_ioat.so.6.0 00:05:00.691 LIB libspdk_scheduler_dynamic.a 00:05:00.691 SO libspdk_scheduler_dynamic.so.4.0 00:05:00.691 SO libspdk_accel_iaa.so.3.0 00:05:00.691 SYMLINK libspdk_keyring_linux.so 00:05:00.952 SYMLINK libspdk_scheduler_gscheduler.so 00:05:00.952 SYMLINK libspdk_keyring_file.so 00:05:00.952 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:00.952 LIB libspdk_blob_bdev.a 00:05:00.952 SYMLINK libspdk_accel_error.so 00:05:00.952 SYMLINK libspdk_accel_ioat.so 00:05:00.952 LIB libspdk_accel_dsa.a 00:05:00.952 SYMLINK libspdk_scheduler_dynamic.so 00:05:00.952 SO libspdk_blob_bdev.so.11.0 00:05:00.952 SYMLINK libspdk_accel_iaa.so 00:05:00.952 SO libspdk_accel_dsa.so.5.0 00:05:00.952 SYMLINK libspdk_blob_bdev.so 00:05:00.952 
SYMLINK libspdk_accel_dsa.so 00:05:01.214 LIB libspdk_fsdev_aio.a 00:05:01.214 SO libspdk_fsdev_aio.so.1.0 00:05:01.475 LIB libspdk_sock_posix.a 00:05:01.475 SYMLINK libspdk_fsdev_aio.so 00:05:01.475 SO libspdk_sock_posix.so.6.0 00:05:01.475 CC module/blobfs/bdev/blobfs_bdev.o 00:05:01.475 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:01.475 CC module/bdev/malloc/bdev_malloc.o 00:05:01.475 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:01.475 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:01.475 CC module/bdev/ftl/bdev_ftl.o 00:05:01.475 CC module/bdev/lvol/vbdev_lvol.o 00:05:01.475 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:01.475 CC module/bdev/error/vbdev_error.o 00:05:01.475 CC module/bdev/gpt/gpt.o 00:05:01.475 CC module/bdev/null/bdev_null.o 00:05:01.475 CC module/bdev/gpt/vbdev_gpt.o 00:05:01.475 CC module/bdev/error/vbdev_error_rpc.o 00:05:01.475 CC module/bdev/null/bdev_null_rpc.o 00:05:01.475 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:01.475 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:01.475 CC module/bdev/aio/bdev_aio.o 00:05:01.475 CC module/bdev/aio/bdev_aio_rpc.o 00:05:01.475 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:01.475 CC module/bdev/delay/vbdev_delay.o 00:05:01.475 CC module/bdev/raid/bdev_raid.o 00:05:01.475 CC module/bdev/raid/bdev_raid_rpc.o 00:05:01.475 CC module/bdev/passthru/vbdev_passthru.o 00:05:01.475 CC module/bdev/raid/bdev_raid_sb.o 00:05:01.475 CC module/bdev/raid/raid0.o 00:05:01.475 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:01.475 CC module/bdev/split/vbdev_split.o 00:05:01.475 CC module/bdev/raid/raid1.o 00:05:01.475 CC module/bdev/nvme/bdev_nvme.o 00:05:01.475 CC module/bdev/raid/concat.o 00:05:01.475 CC module/bdev/iscsi/bdev_iscsi.o 00:05:01.475 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:01.475 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:01.475 CC module/bdev/split/vbdev_split_rpc.o 00:05:01.475 CC module/bdev/nvme/nvme_rpc.o 00:05:01.475 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:01.475 CC 
module/bdev/nvme/bdev_mdns_client.o 00:05:01.475 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:01.475 CC module/bdev/nvme/vbdev_opal.o 00:05:01.475 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:01.475 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:01.475 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:01.475 SYMLINK libspdk_sock_posix.so 00:05:01.735 LIB libspdk_blobfs_bdev.a 00:05:01.735 SO libspdk_blobfs_bdev.so.6.0 00:05:01.735 LIB libspdk_bdev_ftl.a 00:05:01.735 LIB libspdk_bdev_split.a 00:05:01.735 SO libspdk_bdev_ftl.so.6.0 00:05:01.736 LIB libspdk_bdev_gpt.a 00:05:01.736 LIB libspdk_bdev_null.a 00:05:01.736 SYMLINK libspdk_blobfs_bdev.so 00:05:01.736 LIB libspdk_bdev_error.a 00:05:01.736 SO libspdk_bdev_split.so.6.0 00:05:01.736 SO libspdk_bdev_gpt.so.6.0 00:05:01.996 SO libspdk_bdev_null.so.6.0 00:05:01.996 SO libspdk_bdev_error.so.6.0 00:05:01.996 SYMLINK libspdk_bdev_ftl.so 00:05:01.996 LIB libspdk_bdev_passthru.a 00:05:01.997 LIB libspdk_bdev_zone_block.a 00:05:01.997 LIB libspdk_bdev_malloc.a 00:05:01.997 SYMLINK libspdk_bdev_split.so 00:05:01.997 SYMLINK libspdk_bdev_error.so 00:05:01.997 SYMLINK libspdk_bdev_gpt.so 00:05:01.997 LIB libspdk_bdev_aio.a 00:05:01.997 SYMLINK libspdk_bdev_null.so 00:05:01.997 SO libspdk_bdev_passthru.so.6.0 00:05:01.997 LIB libspdk_bdev_delay.a 00:05:01.997 SO libspdk_bdev_zone_block.so.6.0 00:05:01.997 SO libspdk_bdev_malloc.so.6.0 00:05:01.997 LIB libspdk_bdev_iscsi.a 00:05:01.997 SO libspdk_bdev_aio.so.6.0 00:05:01.997 SO libspdk_bdev_delay.so.6.0 00:05:01.997 SO libspdk_bdev_iscsi.so.6.0 00:05:01.997 SYMLINK libspdk_bdev_passthru.so 00:05:01.997 SYMLINK libspdk_bdev_zone_block.so 00:05:01.997 SYMLINK libspdk_bdev_malloc.so 00:05:01.997 SYMLINK libspdk_bdev_aio.so 00:05:01.997 SYMLINK libspdk_bdev_delay.so 00:05:01.997 SYMLINK libspdk_bdev_iscsi.so 00:05:01.997 LIB libspdk_bdev_lvol.a 00:05:02.258 SO libspdk_bdev_lvol.so.6.0 00:05:02.258 LIB libspdk_bdev_virtio.a 00:05:02.258 SO libspdk_bdev_virtio.so.6.0 00:05:02.258 
SYMLINK libspdk_bdev_lvol.so 00:05:02.258 SYMLINK libspdk_bdev_virtio.so 00:05:02.829 LIB libspdk_bdev_raid.a 00:05:02.829 SO libspdk_bdev_raid.so.6.0 00:05:02.829 SYMLINK libspdk_bdev_raid.so 00:05:04.216 LIB libspdk_bdev_nvme.a 00:05:04.216 SO libspdk_bdev_nvme.so.7.0 00:05:04.216 SYMLINK libspdk_bdev_nvme.so 00:05:04.962 CC module/event/subsystems/iobuf/iobuf.o 00:05:04.962 CC module/event/subsystems/scheduler/scheduler.o 00:05:04.962 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:04.962 CC module/event/subsystems/keyring/keyring.o 00:05:04.962 CC module/event/subsystems/vmd/vmd.o 00:05:04.962 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:04.962 CC module/event/subsystems/sock/sock.o 00:05:04.962 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:04.962 CC module/event/subsystems/fsdev/fsdev.o 00:05:05.270 LIB libspdk_event_keyring.a 00:05:05.270 LIB libspdk_event_vhost_blk.a 00:05:05.270 LIB libspdk_event_sock.a 00:05:05.270 LIB libspdk_event_scheduler.a 00:05:05.270 LIB libspdk_event_iobuf.a 00:05:05.270 LIB libspdk_event_vmd.a 00:05:05.270 LIB libspdk_event_fsdev.a 00:05:05.270 SO libspdk_event_keyring.so.1.0 00:05:05.270 SO libspdk_event_vhost_blk.so.3.0 00:05:05.270 SO libspdk_event_scheduler.so.4.0 00:05:05.270 SO libspdk_event_sock.so.5.0 00:05:05.270 SO libspdk_event_iobuf.so.3.0 00:05:05.270 SO libspdk_event_fsdev.so.1.0 00:05:05.270 SO libspdk_event_vmd.so.6.0 00:05:05.270 SYMLINK libspdk_event_keyring.so 00:05:05.270 SYMLINK libspdk_event_vhost_blk.so 00:05:05.270 SYMLINK libspdk_event_scheduler.so 00:05:05.270 SYMLINK libspdk_event_sock.so 00:05:05.270 SYMLINK libspdk_event_iobuf.so 00:05:05.270 SYMLINK libspdk_event_fsdev.so 00:05:05.270 SYMLINK libspdk_event_vmd.so 00:05:05.566 CC module/event/subsystems/accel/accel.o 00:05:05.828 LIB libspdk_event_accel.a 00:05:05.828 SO libspdk_event_accel.so.6.0 00:05:05.828 SYMLINK libspdk_event_accel.so 00:05:06.400 CC module/event/subsystems/bdev/bdev.o 00:05:06.400 LIB libspdk_event_bdev.a 
00:05:06.400 SO libspdk_event_bdev.so.6.0 00:05:06.400 SYMLINK libspdk_event_bdev.so 00:05:06.973 CC module/event/subsystems/ublk/ublk.o 00:05:06.973 CC module/event/subsystems/nbd/nbd.o 00:05:06.973 CC module/event/subsystems/scsi/scsi.o 00:05:06.973 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:06.973 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:06.973 LIB libspdk_event_ublk.a 00:05:06.973 LIB libspdk_event_nbd.a 00:05:06.973 LIB libspdk_event_scsi.a 00:05:06.973 SO libspdk_event_ublk.so.3.0 00:05:06.973 SO libspdk_event_nbd.so.6.0 00:05:06.973 SO libspdk_event_scsi.so.6.0 00:05:07.234 SYMLINK libspdk_event_ublk.so 00:05:07.234 LIB libspdk_event_nvmf.a 00:05:07.234 SYMLINK libspdk_event_nbd.so 00:05:07.234 SYMLINK libspdk_event_scsi.so 00:05:07.234 SO libspdk_event_nvmf.so.6.0 00:05:07.234 SYMLINK libspdk_event_nvmf.so 00:05:07.495 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:07.495 CC module/event/subsystems/iscsi/iscsi.o 00:05:07.757 LIB libspdk_event_vhost_scsi.a 00:05:07.757 LIB libspdk_event_iscsi.a 00:05:07.757 SO libspdk_event_vhost_scsi.so.3.0 00:05:07.757 SO libspdk_event_iscsi.so.6.0 00:05:07.757 SYMLINK libspdk_event_vhost_scsi.so 00:05:07.757 SYMLINK libspdk_event_iscsi.so 00:05:08.018 SO libspdk.so.6.0 00:05:08.018 SYMLINK libspdk.so 00:05:08.278 CXX app/trace/trace.o 00:05:08.278 CC app/trace_record/trace_record.o 00:05:08.540 CC app/spdk_nvme_perf/perf.o 00:05:08.540 CC app/spdk_lspci/spdk_lspci.o 00:05:08.540 CC app/spdk_nvme_discover/discovery_aer.o 00:05:08.540 CC app/spdk_top/spdk_top.o 00:05:08.540 TEST_HEADER include/spdk/accel.h 00:05:08.540 TEST_HEADER include/spdk/accel_module.h 00:05:08.540 TEST_HEADER include/spdk/barrier.h 00:05:08.540 TEST_HEADER include/spdk/assert.h 00:05:08.540 TEST_HEADER include/spdk/base64.h 00:05:08.540 CC test/rpc_client/rpc_client_test.o 00:05:08.540 CC app/spdk_nvme_identify/identify.o 00:05:08.540 TEST_HEADER include/spdk/bdev.h 00:05:08.540 TEST_HEADER include/spdk/bdev_module.h 
00:05:08.540 TEST_HEADER include/spdk/bdev_zone.h 00:05:08.540 TEST_HEADER include/spdk/bit_array.h 00:05:08.540 TEST_HEADER include/spdk/bit_pool.h 00:05:08.540 TEST_HEADER include/spdk/blob_bdev.h 00:05:08.540 TEST_HEADER include/spdk/blobfs.h 00:05:08.540 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:08.540 TEST_HEADER include/spdk/blob.h 00:05:08.540 TEST_HEADER include/spdk/conf.h 00:05:08.540 TEST_HEADER include/spdk/config.h 00:05:08.540 TEST_HEADER include/spdk/cpuset.h 00:05:08.540 TEST_HEADER include/spdk/crc16.h 00:05:08.540 TEST_HEADER include/spdk/crc32.h 00:05:08.540 TEST_HEADER include/spdk/crc64.h 00:05:08.540 TEST_HEADER include/spdk/dif.h 00:05:08.540 TEST_HEADER include/spdk/dma.h 00:05:08.540 TEST_HEADER include/spdk/endian.h 00:05:08.540 TEST_HEADER include/spdk/env.h 00:05:08.540 TEST_HEADER include/spdk/env_dpdk.h 00:05:08.540 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:08.540 TEST_HEADER include/spdk/event.h 00:05:08.540 TEST_HEADER include/spdk/fd_group.h 00:05:08.540 TEST_HEADER include/spdk/fd.h 00:05:08.540 TEST_HEADER include/spdk/fsdev.h 00:05:08.540 TEST_HEADER include/spdk/file.h 00:05:08.540 TEST_HEADER include/spdk/fsdev_module.h 00:05:08.540 TEST_HEADER include/spdk/ftl.h 00:05:08.540 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:08.540 TEST_HEADER include/spdk/gpt_spec.h 00:05:08.540 TEST_HEADER include/spdk/hexlify.h 00:05:08.540 TEST_HEADER include/spdk/histogram_data.h 00:05:08.540 TEST_HEADER include/spdk/idxd.h 00:05:08.540 CC app/spdk_dd/spdk_dd.o 00:05:08.540 TEST_HEADER include/spdk/idxd_spec.h 00:05:08.540 TEST_HEADER include/spdk/init.h 00:05:08.540 TEST_HEADER include/spdk/ioat.h 00:05:08.540 TEST_HEADER include/spdk/iscsi_spec.h 00:05:08.540 TEST_HEADER include/spdk/ioat_spec.h 00:05:08.540 CC app/iscsi_tgt/iscsi_tgt.o 00:05:08.540 TEST_HEADER include/spdk/json.h 00:05:08.540 CC app/nvmf_tgt/nvmf_main.o 00:05:08.540 TEST_HEADER include/spdk/jsonrpc.h 00:05:08.540 TEST_HEADER include/spdk/keyring.h 00:05:08.540 
TEST_HEADER include/spdk/keyring_module.h 00:05:08.540 TEST_HEADER include/spdk/likely.h 00:05:08.540 TEST_HEADER include/spdk/log.h 00:05:08.540 TEST_HEADER include/spdk/lvol.h 00:05:08.540 TEST_HEADER include/spdk/mmio.h 00:05:08.540 TEST_HEADER include/spdk/md5.h 00:05:08.540 TEST_HEADER include/spdk/memory.h 00:05:08.540 TEST_HEADER include/spdk/nbd.h 00:05:08.540 TEST_HEADER include/spdk/net.h 00:05:08.540 TEST_HEADER include/spdk/notify.h 00:05:08.540 TEST_HEADER include/spdk/nvme.h 00:05:08.540 TEST_HEADER include/spdk/nvme_intel.h 00:05:08.540 CC app/spdk_tgt/spdk_tgt.o 00:05:08.540 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:08.540 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:08.540 TEST_HEADER include/spdk/nvme_spec.h 00:05:08.540 TEST_HEADER include/spdk/nvme_zns.h 00:05:08.540 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:08.540 TEST_HEADER include/spdk/nvmf.h 00:05:08.540 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:08.540 TEST_HEADER include/spdk/nvmf_spec.h 00:05:08.540 TEST_HEADER include/spdk/nvmf_transport.h 00:05:08.540 TEST_HEADER include/spdk/opal.h 00:05:08.540 TEST_HEADER include/spdk/opal_spec.h 00:05:08.540 TEST_HEADER include/spdk/pci_ids.h 00:05:08.540 TEST_HEADER include/spdk/pipe.h 00:05:08.540 TEST_HEADER include/spdk/queue.h 00:05:08.540 TEST_HEADER include/spdk/reduce.h 00:05:08.540 TEST_HEADER include/spdk/rpc.h 00:05:08.540 TEST_HEADER include/spdk/scheduler.h 00:05:08.540 TEST_HEADER include/spdk/scsi.h 00:05:08.540 TEST_HEADER include/spdk/scsi_spec.h 00:05:08.540 TEST_HEADER include/spdk/sock.h 00:05:08.540 TEST_HEADER include/spdk/stdinc.h 00:05:08.540 TEST_HEADER include/spdk/string.h 00:05:08.540 TEST_HEADER include/spdk/thread.h 00:05:08.540 TEST_HEADER include/spdk/trace.h 00:05:08.540 TEST_HEADER include/spdk/trace_parser.h 00:05:08.540 TEST_HEADER include/spdk/tree.h 00:05:08.540 TEST_HEADER include/spdk/ublk.h 00:05:08.540 TEST_HEADER include/spdk/util.h 00:05:08.540 TEST_HEADER include/spdk/uuid.h 00:05:08.540 
TEST_HEADER include/spdk/version.h 00:05:08.540 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:08.540 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:08.540 TEST_HEADER include/spdk/vmd.h 00:05:08.540 TEST_HEADER include/spdk/vhost.h 00:05:08.540 TEST_HEADER include/spdk/xor.h 00:05:08.540 TEST_HEADER include/spdk/zipf.h 00:05:08.540 CXX test/cpp_headers/accel.o 00:05:08.540 CXX test/cpp_headers/accel_module.o 00:05:08.540 CXX test/cpp_headers/assert.o 00:05:08.540 CXX test/cpp_headers/barrier.o 00:05:08.540 CXX test/cpp_headers/base64.o 00:05:08.541 CXX test/cpp_headers/bdev.o 00:05:08.541 CXX test/cpp_headers/bit_pool.o 00:05:08.541 CXX test/cpp_headers/bdev_module.o 00:05:08.541 CXX test/cpp_headers/bdev_zone.o 00:05:08.541 CXX test/cpp_headers/blob_bdev.o 00:05:08.541 CXX test/cpp_headers/bit_array.o 00:05:08.541 CXX test/cpp_headers/blobfs_bdev.o 00:05:08.541 CXX test/cpp_headers/blobfs.o 00:05:08.541 CXX test/cpp_headers/blob.o 00:05:08.541 CXX test/cpp_headers/conf.o 00:05:08.541 CXX test/cpp_headers/config.o 00:05:08.541 CXX test/cpp_headers/cpuset.o 00:05:08.541 CXX test/cpp_headers/crc16.o 00:05:08.541 CXX test/cpp_headers/crc64.o 00:05:08.541 CXX test/cpp_headers/crc32.o 00:05:08.541 CXX test/cpp_headers/dif.o 00:05:08.541 CXX test/cpp_headers/dma.o 00:05:08.541 CXX test/cpp_headers/endian.o 00:05:08.541 CXX test/cpp_headers/env_dpdk.o 00:05:08.541 CXX test/cpp_headers/fd.o 00:05:08.541 CXX test/cpp_headers/env.o 00:05:08.541 CXX test/cpp_headers/event.o 00:05:08.541 CXX test/cpp_headers/file.o 00:05:08.541 CXX test/cpp_headers/fd_group.o 00:05:08.541 CXX test/cpp_headers/fsdev.o 00:05:08.541 CXX test/cpp_headers/ftl.o 00:05:08.541 CXX test/cpp_headers/fsdev_module.o 00:05:08.541 CXX test/cpp_headers/fuse_dispatcher.o 00:05:08.541 CXX test/cpp_headers/hexlify.o 00:05:08.541 CXX test/cpp_headers/gpt_spec.o 00:05:08.541 CXX test/cpp_headers/idxd_spec.o 00:05:08.541 CXX test/cpp_headers/histogram_data.o 00:05:08.541 CXX test/cpp_headers/idxd.o 00:05:08.541 
CXX test/cpp_headers/init.o 00:05:08.541 CXX test/cpp_headers/ioat.o 00:05:08.541 CXX test/cpp_headers/iscsi_spec.o 00:05:08.541 CXX test/cpp_headers/ioat_spec.o 00:05:08.541 CXX test/cpp_headers/json.o 00:05:08.541 CXX test/cpp_headers/keyring_module.o 00:05:08.541 CXX test/cpp_headers/keyring.o 00:05:08.541 CXX test/cpp_headers/log.o 00:05:08.541 CXX test/cpp_headers/jsonrpc.o 00:05:08.541 CXX test/cpp_headers/likely.o 00:05:08.541 CC examples/util/zipf/zipf.o 00:05:08.541 CXX test/cpp_headers/memory.o 00:05:08.541 CXX test/cpp_headers/mmio.o 00:05:08.541 CXX test/cpp_headers/md5.o 00:05:08.541 CXX test/cpp_headers/lvol.o 00:05:08.541 CXX test/cpp_headers/nbd.o 00:05:08.541 CXX test/cpp_headers/notify.o 00:05:08.541 CXX test/cpp_headers/nvme.o 00:05:08.541 CXX test/cpp_headers/net.o 00:05:08.541 CXX test/cpp_headers/nvme_spec.o 00:05:08.541 CXX test/cpp_headers/nvme_intel.o 00:05:08.541 CXX test/cpp_headers/nvme_ocssd.o 00:05:08.541 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:08.541 CXX test/cpp_headers/nvmf.o 00:05:08.541 CC test/app/stub/stub.o 00:05:08.541 CXX test/cpp_headers/nvme_zns.o 00:05:08.541 CXX test/cpp_headers/nvmf_cmd.o 00:05:08.541 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:08.541 CXX test/cpp_headers/nvmf_transport.o 00:05:08.541 CXX test/cpp_headers/nvmf_spec.o 00:05:08.541 CC test/thread/poller_perf/poller_perf.o 00:05:08.541 CC examples/ioat/perf/perf.o 00:05:08.541 LINK spdk_lspci 00:05:08.541 CXX test/cpp_headers/pci_ids.o 00:05:08.541 CXX test/cpp_headers/opal.o 00:05:08.541 CXX test/cpp_headers/queue.o 00:05:08.541 CXX test/cpp_headers/pipe.o 00:05:08.541 CXX test/cpp_headers/opal_spec.o 00:05:08.541 CXX test/cpp_headers/reduce.o 00:05:08.541 CXX test/cpp_headers/scsi.o 00:05:08.541 CXX test/cpp_headers/rpc.o 00:05:08.541 CXX test/cpp_headers/scheduler.o 00:05:08.541 CXX test/cpp_headers/scsi_spec.o 00:05:08.541 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:08.541 CC app/fio/nvme/fio_plugin.o 00:05:08.541 CC 
test/app/jsoncat/jsoncat.o 00:05:08.541 CXX test/cpp_headers/sock.o 00:05:08.541 CC test/env/pci/pci_ut.o 00:05:08.541 CXX test/cpp_headers/thread.o 00:05:08.541 CC examples/ioat/verify/verify.o 00:05:08.541 CXX test/cpp_headers/stdinc.o 00:05:08.541 CXX test/cpp_headers/trace.o 00:05:08.541 CXX test/cpp_headers/string.o 00:05:08.541 CC test/env/memory/memory_ut.o 00:05:08.541 CXX test/cpp_headers/trace_parser.o 00:05:08.541 CXX test/cpp_headers/tree.o 00:05:08.541 CC test/env/vtophys/vtophys.o 00:05:08.541 CXX test/cpp_headers/ublk.o 00:05:08.541 CXX test/cpp_headers/util.o 00:05:08.541 CXX test/cpp_headers/uuid.o 00:05:08.541 CXX test/cpp_headers/version.o 00:05:08.541 CXX test/cpp_headers/vfio_user_pci.o 00:05:08.541 CXX test/cpp_headers/vmd.o 00:05:08.541 CXX test/cpp_headers/vfio_user_spec.o 00:05:08.804 CC test/app/histogram_perf/histogram_perf.o 00:05:08.804 CXX test/cpp_headers/vhost.o 00:05:08.804 CXX test/cpp_headers/xor.o 00:05:08.804 CXX test/cpp_headers/zipf.o 00:05:08.804 CC test/app/bdev_svc/bdev_svc.o 00:05:08.804 CC app/fio/bdev/fio_plugin.o 00:05:08.804 CC test/dma/test_dma/test_dma.o 00:05:08.804 LINK spdk_nvme_discover 00:05:08.804 LINK interrupt_tgt 00:05:08.804 LINK rpc_client_test 00:05:08.804 LINK iscsi_tgt 00:05:08.804 LINK nvmf_tgt 00:05:08.804 LINK spdk_tgt 00:05:09.062 CC test/env/mem_callbacks/mem_callbacks.o 00:05:09.062 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:09.062 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:09.062 LINK spdk_trace_record 00:05:09.062 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:09.062 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:09.062 LINK spdk_trace 00:05:09.321 LINK jsoncat 00:05:09.321 LINK bdev_svc 00:05:09.321 LINK ioat_perf 00:05:09.321 LINK verify 00:05:09.321 LINK zipf 00:05:09.321 LINK histogram_perf 00:05:09.321 LINK spdk_dd 00:05:09.321 LINK env_dpdk_post_init 00:05:09.321 LINK poller_perf 00:05:09.321 LINK vtophys 00:05:09.321 LINK stub 00:05:09.582 CC app/vhost/vhost.o 00:05:09.582 LINK 
nvme_fuzz 00:05:09.582 LINK pci_ut 00:05:09.582 CC examples/idxd/perf/perf.o 00:05:09.582 CC examples/vmd/lsvmd/lsvmd.o 00:05:09.582 LINK vhost_fuzz 00:05:09.582 CC examples/sock/hello_world/hello_sock.o 00:05:09.582 CC examples/vmd/led/led.o 00:05:09.843 LINK vhost 00:05:09.843 LINK test_dma 00:05:09.843 CC test/event/reactor/reactor.o 00:05:09.843 CC examples/thread/thread/thread_ex.o 00:05:09.843 CC test/event/reactor_perf/reactor_perf.o 00:05:09.843 CC test/event/event_perf/event_perf.o 00:05:09.843 LINK spdk_nvme 00:05:09.843 LINK spdk_bdev 00:05:09.843 CC test/event/app_repeat/app_repeat.o 00:05:09.843 LINK spdk_top 00:05:09.843 LINK mem_callbacks 00:05:09.843 CC test/event/scheduler/scheduler.o 00:05:09.843 LINK lsvmd 00:05:09.843 LINK led 00:05:09.843 LINK reactor_perf 00:05:09.843 LINK reactor 00:05:09.843 LINK event_perf 00:05:09.843 LINK spdk_nvme_perf 00:05:09.843 LINK spdk_nvme_identify 00:05:10.103 LINK app_repeat 00:05:10.103 LINK hello_sock 00:05:10.103 LINK thread 00:05:10.103 LINK idxd_perf 00:05:10.103 LINK scheduler 00:05:10.364 CC test/nvme/aer/aer.o 00:05:10.364 CC test/nvme/err_injection/err_injection.o 00:05:10.364 CC test/nvme/startup/startup.o 00:05:10.364 CC test/nvme/sgl/sgl.o 00:05:10.364 CC test/nvme/fused_ordering/fused_ordering.o 00:05:10.364 CC test/nvme/boot_partition/boot_partition.o 00:05:10.364 CC test/nvme/cuse/cuse.o 00:05:10.364 CC test/nvme/reserve/reserve.o 00:05:10.364 CC test/nvme/e2edp/nvme_dp.o 00:05:10.364 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:10.364 CC test/nvme/overhead/overhead.o 00:05:10.364 CC test/blobfs/mkfs/mkfs.o 00:05:10.364 CC test/nvme/reset/reset.o 00:05:10.364 CC test/nvme/fdp/fdp.o 00:05:10.364 CC test/nvme/compliance/nvme_compliance.o 00:05:10.364 CC test/accel/dif/dif.o 00:05:10.364 CC test/nvme/connect_stress/connect_stress.o 00:05:10.364 CC test/nvme/simple_copy/simple_copy.o 00:05:10.364 LINK memory_ut 00:05:10.364 CC test/lvol/esnap/esnap.o 00:05:10.365 LINK startup 00:05:10.365 CC 
examples/nvme/pmr_persistence/pmr_persistence.o 00:05:10.626 LINK err_injection 00:05:10.626 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:10.626 LINK boot_partition 00:05:10.626 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:10.626 LINK connect_stress 00:05:10.626 LINK fused_ordering 00:05:10.626 CC examples/nvme/hotplug/hotplug.o 00:05:10.626 CC examples/nvme/arbitration/arbitration.o 00:05:10.626 CC examples/nvme/hello_world/hello_world.o 00:05:10.626 CC examples/nvme/reconnect/reconnect.o 00:05:10.626 CC examples/nvme/abort/abort.o 00:05:10.626 LINK doorbell_aers 00:05:10.626 LINK reserve 00:05:10.626 LINK mkfs 00:05:10.626 LINK aer 00:05:10.626 LINK reset 00:05:10.626 LINK simple_copy 00:05:10.626 CC examples/accel/perf/accel_perf.o 00:05:10.626 LINK nvme_dp 00:05:10.626 LINK sgl 00:05:10.626 CC examples/blob/cli/blobcli.o 00:05:10.626 LINK overhead 00:05:10.626 CC examples/blob/hello_world/hello_blob.o 00:05:10.626 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:10.626 LINK fdp 00:05:10.626 LINK cmb_copy 00:05:10.626 LINK pmr_persistence 00:05:10.626 LINK nvme_compliance 00:05:10.885 LINK hello_world 00:05:10.885 LINK hotplug 00:05:10.885 LINK arbitration 00:05:10.885 LINK reconnect 00:05:10.885 LINK hello_blob 00:05:10.885 LINK abort 00:05:10.885 LINK hello_fsdev 00:05:11.145 LINK iscsi_fuzz 00:05:11.145 LINK nvme_manage 00:05:11.145 LINK dif 00:05:11.145 LINK blobcli 00:05:11.145 LINK accel_perf 00:05:11.717 CC test/bdev/bdevio/bdevio.o 00:05:11.717 LINK cuse 00:05:11.717 CC examples/bdev/hello_world/hello_bdev.o 00:05:11.717 CC examples/bdev/bdevperf/bdevperf.o 00:05:11.977 LINK hello_bdev 00:05:11.977 LINK bdevio 00:05:12.549 LINK bdevperf 00:05:13.493 CC examples/nvmf/nvmf/nvmf.o 00:05:13.755 LINK nvmf 00:05:15.744 LINK esnap 00:05:16.004 00:05:16.004 real 0m57.599s 00:05:16.004 user 8m13.182s 00:05:16.004 sys 4m15.048s 00:05:16.004 14:15:39 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:16.004 14:15:39 make -- 
common/autotest_common.sh@10 -- $ set +x 00:05:16.004 ************************************ 00:05:16.004 END TEST make 00:05:16.004 ************************************ 00:05:16.265 14:15:39 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:16.265 14:15:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:16.265 14:15:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:16.265 14:15:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:16.265 14:15:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:16.265 14:15:39 -- pm/common@44 -- $ pid=2674093 00:05:16.265 14:15:39 -- pm/common@50 -- $ kill -TERM 2674093 00:05:16.265 14:15:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:16.265 14:15:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:16.265 14:15:39 -- pm/common@44 -- $ pid=2674094 00:05:16.265 14:15:39 -- pm/common@50 -- $ kill -TERM 2674094 00:05:16.265 14:15:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:16.265 14:15:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:16.265 14:15:39 -- pm/common@44 -- $ pid=2674095 00:05:16.265 14:15:39 -- pm/common@50 -- $ kill -TERM 2674095 00:05:16.265 14:15:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:16.265 14:15:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:16.265 14:15:39 -- pm/common@44 -- $ pid=2674119 00:05:16.265 14:15:39 -- pm/common@50 -- $ sudo -E kill -TERM 2674119 00:05:16.265 14:15:39 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:16.265 14:15:39 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:16.265 14:15:39 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:16.266 
14:15:39 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:16.266 14:15:39 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.266 14:15:39 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.266 14:15:39 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.266 14:15:39 -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.266 14:15:39 -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.266 14:15:39 -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.266 14:15:39 -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.266 14:15:39 -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.266 14:15:39 -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.266 14:15:39 -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.266 14:15:39 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.266 14:15:39 -- scripts/common.sh@344 -- # case "$op" in 00:05:16.266 14:15:39 -- scripts/common.sh@345 -- # : 1 00:05:16.266 14:15:39 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.266 14:15:39 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:16.266 14:15:39 -- scripts/common.sh@365 -- # decimal 1 00:05:16.266 14:15:39 -- scripts/common.sh@353 -- # local d=1 00:05:16.266 14:15:39 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.266 14:15:39 -- scripts/common.sh@355 -- # echo 1 00:05:16.527 14:15:39 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.527 14:15:39 -- scripts/common.sh@366 -- # decimal 2 00:05:16.527 14:15:39 -- scripts/common.sh@353 -- # local d=2 00:05:16.527 14:15:39 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.527 14:15:39 -- scripts/common.sh@355 -- # echo 2 00:05:16.527 14:15:39 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.527 14:15:39 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.527 14:15:39 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.527 14:15:39 -- scripts/common.sh@368 -- # return 0 00:05:16.527 14:15:39 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.527 14:15:39 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:16.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.527 --rc genhtml_branch_coverage=1 00:05:16.527 --rc genhtml_function_coverage=1 00:05:16.527 --rc genhtml_legend=1 00:05:16.527 --rc geninfo_all_blocks=1 00:05:16.527 --rc geninfo_unexecuted_blocks=1 00:05:16.527 00:05:16.527 ' 00:05:16.527 14:15:39 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:16.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.527 --rc genhtml_branch_coverage=1 00:05:16.527 --rc genhtml_function_coverage=1 00:05:16.527 --rc genhtml_legend=1 00:05:16.527 --rc geninfo_all_blocks=1 00:05:16.527 --rc geninfo_unexecuted_blocks=1 00:05:16.527 00:05:16.527 ' 00:05:16.527 14:15:39 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:16.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.527 --rc genhtml_branch_coverage=1 00:05:16.527 --rc 
genhtml_function_coverage=1 00:05:16.527 --rc genhtml_legend=1 00:05:16.527 --rc geninfo_all_blocks=1 00:05:16.527 --rc geninfo_unexecuted_blocks=1 00:05:16.527 00:05:16.527 ' 00:05:16.527 14:15:39 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:16.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.527 --rc genhtml_branch_coverage=1 00:05:16.527 --rc genhtml_function_coverage=1 00:05:16.527 --rc genhtml_legend=1 00:05:16.527 --rc geninfo_all_blocks=1 00:05:16.527 --rc geninfo_unexecuted_blocks=1 00:05:16.527 00:05:16.527 ' 00:05:16.527 14:15:39 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:16.527 14:15:39 -- nvmf/common.sh@7 -- # uname -s 00:05:16.527 14:15:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:16.527 14:15:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:16.527 14:15:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:16.527 14:15:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:16.527 14:15:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:16.527 14:15:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:16.527 14:15:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:16.527 14:15:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:16.527 14:15:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:16.527 14:15:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:16.527 14:15:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:16.527 14:15:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:16.527 14:15:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:16.527 14:15:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:16.527 14:15:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:16.527 14:15:40 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:16.527 14:15:40 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:16.527 14:15:40 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:16.527 14:15:40 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:16.527 14:15:40 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:16.527 14:15:40 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:16.527 14:15:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.527 14:15:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.527 14:15:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.527 14:15:40 -- paths/export.sh@5 -- # export PATH 00:05:16.527 14:15:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.527 14:15:40 -- nvmf/common.sh@51 -- # : 0 00:05:16.527 14:15:40 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:16.527 14:15:40 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:05:16.527 14:15:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:16.527 14:15:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:16.527 14:15:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:16.527 14:15:40 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:16.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:16.527 14:15:40 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:16.527 14:15:40 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:16.527 14:15:40 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:16.527 14:15:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:16.527 14:15:40 -- spdk/autotest.sh@32 -- # uname -s 00:05:16.527 14:15:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:16.527 14:15:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:16.527 14:15:40 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:16.527 14:15:40 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:16.527 14:15:40 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:16.527 14:15:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:16.527 14:15:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:16.527 14:15:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:16.527 14:15:40 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:16.527 14:15:40 -- spdk/autotest.sh@48 -- # udevadm_pid=2740079 00:05:16.527 14:15:40 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:16.527 14:15:40 -- pm/common@17 -- # local monitor 00:05:16.527 14:15:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:16.527 14:15:40 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:05:16.527 14:15:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:16.527 14:15:40 -- pm/common@21 -- # date +%s 00:05:16.527 14:15:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:16.527 14:15:40 -- pm/common@21 -- # date +%s 00:05:16.527 14:15:40 -- pm/common@25 -- # sleep 1 00:05:16.527 14:15:40 -- pm/common@21 -- # date +%s 00:05:16.527 14:15:40 -- pm/common@21 -- # date +%s 00:05:16.527 14:15:40 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728303340 00:05:16.527 14:15:40 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728303340 00:05:16.527 14:15:40 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728303340 00:05:16.527 14:15:40 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728303340 00:05:16.527 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728303340_collect-cpu-load.pm.log 00:05:16.527 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728303340_collect-vmstat.pm.log 00:05:16.527 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728303340_collect-cpu-temp.pm.log 00:05:16.527 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728303340_collect-bmc-pm.bmc.pm.log 00:05:17.466 
14:15:41 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:17.466 14:15:41 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:17.466 14:15:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:17.466 14:15:41 -- common/autotest_common.sh@10 -- # set +x 00:05:17.466 14:15:41 -- spdk/autotest.sh@59 -- # create_test_list 00:05:17.466 14:15:41 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:17.466 14:15:41 -- common/autotest_common.sh@10 -- # set +x 00:05:17.466 14:15:41 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:17.466 14:15:41 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:17.466 14:15:41 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:17.466 14:15:41 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:17.466 14:15:41 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:17.466 14:15:41 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:17.466 14:15:41 -- common/autotest_common.sh@1455 -- # uname 00:05:17.466 14:15:41 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:17.466 14:15:41 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:17.466 14:15:41 -- common/autotest_common.sh@1475 -- # uname 00:05:17.466 14:15:41 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:17.466 14:15:41 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:17.466 14:15:41 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:17.726 lcov: LCOV version 1.15 00:05:17.726 14:15:41 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:32.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:32.634 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:47.548 14:16:10 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:47.548 14:16:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:47.548 14:16:10 -- common/autotest_common.sh@10 -- # set +x 00:05:47.548 14:16:10 -- spdk/autotest.sh@78 -- # rm -f 00:05:47.548 14:16:10 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:50.853 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:05:50.853 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:05:50.853 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:05:50.853 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:05:50.853 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:05:50.853 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:05:50.853 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:05:50.853 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:05:50.853 0000:65:00.0 (144d a80a): Already using the nvme driver 00:05:51.114 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:05:51.114 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:05:51.114 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:05:51.114 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:05:51.114 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:05:51.114 
0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:05:51.114 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:05:51.114 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:05:51.376 14:16:15 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:51.376 14:16:15 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:51.376 14:16:15 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:51.376 14:16:15 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:51.376 14:16:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:51.376 14:16:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:51.376 14:16:15 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:51.376 14:16:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:51.376 14:16:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:51.376 14:16:15 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:51.376 14:16:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:51.376 14:16:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:51.376 14:16:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:51.376 14:16:15 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:51.376 14:16:15 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:51.376 No valid GPT data, bailing 00:05:51.638 14:16:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:51.638 14:16:15 -- scripts/common.sh@394 -- # pt= 00:05:51.638 14:16:15 -- scripts/common.sh@395 -- # return 1 00:05:51.638 14:16:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:51.638 1+0 records in 00:05:51.638 1+0 records out 00:05:51.638 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0040764 s, 257 MB/s 00:05:51.638 14:16:15 -- spdk/autotest.sh@105 -- # sync 00:05:51.638 14:16:15 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:51.638 14:16:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:51.638 14:16:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:59.784 14:16:23 -- spdk/autotest.sh@111 -- # uname -s 00:05:59.784 14:16:23 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:59.784 14:16:23 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:59.784 14:16:23 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:03.996 Hugepages 00:06:03.996 node hugesize free / total 00:06:03.996 node0 1048576kB 0 / 0 00:06:03.996 node0 2048kB 0 / 0 00:06:03.996 node1 1048576kB 0 / 0 00:06:03.996 node1 2048kB 0 / 0 00:06:03.996 00:06:03.996 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:03.996 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:06:03.996 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:06:03.996 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:06:03.996 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:06:03.996 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:06:03.996 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:06:03.996 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:06:03.996 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:06:03.996 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:06:03.996 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:06:03.996 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:06:03.996 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:06:03.996 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:06:03.996 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:06:03.996 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:06:03.996 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:06:03.996 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:06:03.996 14:16:27 -- spdk/autotest.sh@117 -- # uname -s 00:06:03.996 14:16:27 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:03.996 14:16:27 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:06:03.996 14:16:27 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:07.301 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:07.301 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:07.301 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:07.301 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:07.301 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:07.301 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:07.301 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:07.301 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:07.301 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:07.301 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:07.301 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:07.301 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:07.301 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:07.301 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:07.301 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:07.301 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:09.214 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:09.474 14:16:32 -- common/autotest_common.sh@1515 -- # sleep 1 00:06:10.415 14:16:33 -- common/autotest_common.sh@1516 -- # bdfs=() 00:06:10.415 14:16:33 -- common/autotest_common.sh@1516 -- # local bdfs 00:06:10.415 14:16:33 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:06:10.415 14:16:33 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:06:10.415 14:16:33 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:10.415 14:16:33 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:10.415 14:16:33 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:10.415 14:16:33 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:10.415 14:16:33 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:06:10.415 14:16:34 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:06:10.415 14:16:34 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:06:10.415 14:16:34 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:13.714 Waiting for block devices as requested 00:06:13.974 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:13.974 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:13.974 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:14.235 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:14.235 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:14.235 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:14.495 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:14.495 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:14.495 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:06:14.765 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:14.765 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:14.765 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:15.026 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:15.026 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:15.026 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:15.026 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:15.286 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:15.546 14:16:39 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:15.546 14:16:39 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:06:15.546 14:16:39 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:06:15.546 14:16:39 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:06:15.546 14:16:39 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:15.546 14:16:39 -- common/autotest_common.sh@1486 -- # [[ -z 
/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:06:15.546 14:16:39 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:15.546 14:16:39 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:06:15.546 14:16:39 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:06:15.546 14:16:39 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:06:15.546 14:16:39 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:15.546 14:16:39 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:06:15.546 14:16:39 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:15.546 14:16:39 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:06:15.546 14:16:39 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:15.546 14:16:39 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:15.546 14:16:39 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:06:15.546 14:16:39 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:15.546 14:16:39 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:15.546 14:16:39 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:15.546 14:16:39 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:15.546 14:16:39 -- common/autotest_common.sh@1541 -- # continue 00:06:15.546 14:16:39 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:15.546 14:16:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:15.546 14:16:39 -- common/autotest_common.sh@10 -- # set +x 00:06:15.546 14:16:39 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:15.546 14:16:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:15.546 14:16:39 -- common/autotest_common.sh@10 -- # set +x 00:06:15.546 14:16:39 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:19.749 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:19.749 0000:80:01.7 (8086 0b00): 
ioatdma -> vfio-pci 00:06:19.749 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:19.749 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:19.749 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:19.749 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:19.749 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:19.749 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:19.749 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:19.749 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:19.749 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:19.749 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:19.749 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:19.749 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:19.749 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:19.749 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:19.749 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:19.749 14:16:43 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:19.749 14:16:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:19.749 14:16:43 -- common/autotest_common.sh@10 -- # set +x 00:06:19.749 14:16:43 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:19.749 14:16:43 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:19.749 14:16:43 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:19.749 14:16:43 -- common/autotest_common.sh@1561 -- # bdfs=() 00:06:19.749 14:16:43 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:06:19.749 14:16:43 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:06:19.749 14:16:43 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:06:19.749 14:16:43 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:06:19.749 14:16:43 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:19.749 14:16:43 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:19.749 14:16:43 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:06:19.749 14:16:43 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:19.749 14:16:43 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:19.749 14:16:43 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:06:19.749 14:16:43 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:06:19.749 14:16:43 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:19.749 14:16:43 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:06:19.749 14:16:43 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:06:19.749 14:16:43 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:06:19.749 14:16:43 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:06:19.749 14:16:43 -- common/autotest_common.sh@1570 -- # return 0 00:06:19.749 14:16:43 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:06:19.749 14:16:43 -- common/autotest_common.sh@1578 -- # return 0 00:06:19.749 14:16:43 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:19.749 14:16:43 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:19.749 14:16:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:19.749 14:16:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:19.749 14:16:43 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:19.749 14:16:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:19.749 14:16:43 -- common/autotest_common.sh@10 -- # set +x 00:06:19.749 14:16:43 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:19.749 14:16:43 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:19.749 14:16:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.749 14:16:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.749 14:16:43 -- common/autotest_common.sh@10 -- # set +x 00:06:19.749 ************************************ 
00:06:19.749 START TEST env 00:06:19.749 ************************************ 00:06:19.749 14:16:43 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:20.011 * Looking for test storage... 00:06:20.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:20.011 14:16:43 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:20.011 14:16:43 env -- common/autotest_common.sh@1681 -- # lcov --version 00:06:20.011 14:16:43 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:20.011 14:16:43 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:20.011 14:16:43 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.011 14:16:43 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.011 14:16:43 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.011 14:16:43 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.011 14:16:43 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.011 14:16:43 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.011 14:16:43 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.011 14:16:43 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.011 14:16:43 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.011 14:16:43 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.011 14:16:43 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.011 14:16:43 env -- scripts/common.sh@344 -- # case "$op" in 00:06:20.011 14:16:43 env -- scripts/common.sh@345 -- # : 1 00:06:20.011 14:16:43 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.011 14:16:43 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.011 14:16:43 env -- scripts/common.sh@365 -- # decimal 1 00:06:20.011 14:16:43 env -- scripts/common.sh@353 -- # local d=1 00:06:20.011 14:16:43 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.011 14:16:43 env -- scripts/common.sh@355 -- # echo 1 00:06:20.011 14:16:43 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.011 14:16:43 env -- scripts/common.sh@366 -- # decimal 2 00:06:20.011 14:16:43 env -- scripts/common.sh@353 -- # local d=2 00:06:20.011 14:16:43 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.011 14:16:43 env -- scripts/common.sh@355 -- # echo 2 00:06:20.011 14:16:43 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.011 14:16:43 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.011 14:16:43 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.011 14:16:43 env -- scripts/common.sh@368 -- # return 0 00:06:20.011 14:16:43 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.011 14:16:43 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:20.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.011 --rc genhtml_branch_coverage=1 00:06:20.011 --rc genhtml_function_coverage=1 00:06:20.011 --rc genhtml_legend=1 00:06:20.011 --rc geninfo_all_blocks=1 00:06:20.011 --rc geninfo_unexecuted_blocks=1 00:06:20.011 00:06:20.011 ' 00:06:20.011 14:16:43 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:20.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.011 --rc genhtml_branch_coverage=1 00:06:20.011 --rc genhtml_function_coverage=1 00:06:20.011 --rc genhtml_legend=1 00:06:20.011 --rc geninfo_all_blocks=1 00:06:20.011 --rc geninfo_unexecuted_blocks=1 00:06:20.011 00:06:20.011 ' 00:06:20.011 14:16:43 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:20.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:20.011 --rc genhtml_branch_coverage=1 00:06:20.011 --rc genhtml_function_coverage=1 00:06:20.011 --rc genhtml_legend=1 00:06:20.011 --rc geninfo_all_blocks=1 00:06:20.011 --rc geninfo_unexecuted_blocks=1 00:06:20.011 00:06:20.011 ' 00:06:20.011 14:16:43 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:20.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.011 --rc genhtml_branch_coverage=1 00:06:20.011 --rc genhtml_function_coverage=1 00:06:20.011 --rc genhtml_legend=1 00:06:20.011 --rc geninfo_all_blocks=1 00:06:20.011 --rc geninfo_unexecuted_blocks=1 00:06:20.011 00:06:20.011 ' 00:06:20.011 14:16:43 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:20.011 14:16:43 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.011 14:16:43 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.011 14:16:43 env -- common/autotest_common.sh@10 -- # set +x 00:06:20.011 ************************************ 00:06:20.011 START TEST env_memory 00:06:20.011 ************************************ 00:06:20.011 14:16:43 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:20.011 00:06:20.011 00:06:20.011 CUnit - A unit testing framework for C - Version 2.1-3 00:06:20.011 http://cunit.sourceforge.net/ 00:06:20.011 00:06:20.011 00:06:20.011 Suite: memory 00:06:20.011 Test: alloc and free memory map ...[2024-10-07 14:16:43.673624] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:20.011 passed 00:06:20.011 Test: mem map translation ...[2024-10-07 14:16:43.715531] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:20.011 [2024-10-07 
14:16:43.715572] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:20.011 [2024-10-07 14:16:43.715637] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:20.011 [2024-10-07 14:16:43.715652] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:20.272 passed 00:06:20.272 Test: mem map registration ...[2024-10-07 14:16:43.789291] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:20.272 [2024-10-07 14:16:43.789328] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:20.272 passed 00:06:20.272 Test: mem map adjacent registrations ...passed 00:06:20.272 00:06:20.272 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.272 suites 1 1 n/a 0 0 00:06:20.272 tests 4 4 4 0 0 00:06:20.272 asserts 152 152 152 0 n/a 00:06:20.272 00:06:20.272 Elapsed time = 0.259 seconds 00:06:20.272 00:06:20.272 real 0m0.298s 00:06:20.272 user 0m0.274s 00:06:20.272 sys 0m0.023s 00:06:20.272 14:16:43 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.272 14:16:43 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:20.272 ************************************ 00:06:20.272 END TEST env_memory 00:06:20.272 ************************************ 00:06:20.272 14:16:43 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:20.272 14:16:43 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:06:20.272 14:16:43 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.272 14:16:43 env -- common/autotest_common.sh@10 -- # set +x 00:06:20.533 ************************************ 00:06:20.533 START TEST env_vtophys 00:06:20.533 ************************************ 00:06:20.533 14:16:43 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:20.533 EAL: lib.eal log level changed from notice to debug 00:06:20.533 EAL: Detected lcore 0 as core 0 on socket 0 00:06:20.533 EAL: Detected lcore 1 as core 1 on socket 0 00:06:20.533 EAL: Detected lcore 2 as core 2 on socket 0 00:06:20.533 EAL: Detected lcore 3 as core 3 on socket 0 00:06:20.533 EAL: Detected lcore 4 as core 4 on socket 0 00:06:20.533 EAL: Detected lcore 5 as core 5 on socket 0 00:06:20.533 EAL: Detected lcore 6 as core 6 on socket 0 00:06:20.533 EAL: Detected lcore 7 as core 7 on socket 0 00:06:20.533 EAL: Detected lcore 8 as core 8 on socket 0 00:06:20.533 EAL: Detected lcore 9 as core 9 on socket 0 00:06:20.533 EAL: Detected lcore 10 as core 10 on socket 0 00:06:20.533 EAL: Detected lcore 11 as core 11 on socket 0 00:06:20.533 EAL: Detected lcore 12 as core 12 on socket 0 00:06:20.533 EAL: Detected lcore 13 as core 13 on socket 0 00:06:20.533 EAL: Detected lcore 14 as core 14 on socket 0 00:06:20.533 EAL: Detected lcore 15 as core 15 on socket 0 00:06:20.533 EAL: Detected lcore 16 as core 16 on socket 0 00:06:20.533 EAL: Detected lcore 17 as core 17 on socket 0 00:06:20.533 EAL: Detected lcore 18 as core 18 on socket 0 00:06:20.533 EAL: Detected lcore 19 as core 19 on socket 0 00:06:20.533 EAL: Detected lcore 20 as core 20 on socket 0 00:06:20.533 EAL: Detected lcore 21 as core 21 on socket 0 00:06:20.533 EAL: Detected lcore 22 as core 22 on socket 0 00:06:20.533 EAL: Detected lcore 23 as core 23 on socket 0 00:06:20.533 EAL: Detected lcore 24 as core 24 on socket 0 00:06:20.533 EAL: Detected lcore 25 
as core 25 on socket 0 00:06:20.533 EAL: Detected lcore 26 as core 26 on socket 0 00:06:20.533 EAL: Detected lcore 27 as core 27 on socket 0 00:06:20.533 EAL: Detected lcore 28 as core 28 on socket 0 00:06:20.533 EAL: Detected lcore 29 as core 29 on socket 0 00:06:20.533 EAL: Detected lcore 30 as core 30 on socket 0 00:06:20.533 EAL: Detected lcore 31 as core 31 on socket 0 00:06:20.533 EAL: Detected lcore 32 as core 32 on socket 0 00:06:20.533 EAL: Detected lcore 33 as core 33 on socket 0 00:06:20.533 EAL: Detected lcore 34 as core 34 on socket 0 00:06:20.533 EAL: Detected lcore 35 as core 35 on socket 0 00:06:20.533 EAL: Detected lcore 36 as core 0 on socket 1 00:06:20.533 EAL: Detected lcore 37 as core 1 on socket 1 00:06:20.533 EAL: Detected lcore 38 as core 2 on socket 1 00:06:20.533 EAL: Detected lcore 39 as core 3 on socket 1 00:06:20.533 EAL: Detected lcore 40 as core 4 on socket 1 00:06:20.533 EAL: Detected lcore 41 as core 5 on socket 1 00:06:20.533 EAL: Detected lcore 42 as core 6 on socket 1 00:06:20.533 EAL: Detected lcore 43 as core 7 on socket 1 00:06:20.533 EAL: Detected lcore 44 as core 8 on socket 1 00:06:20.533 EAL: Detected lcore 45 as core 9 on socket 1 00:06:20.533 EAL: Detected lcore 46 as core 10 on socket 1 00:06:20.533 EAL: Detected lcore 47 as core 11 on socket 1 00:06:20.533 EAL: Detected lcore 48 as core 12 on socket 1 00:06:20.533 EAL: Detected lcore 49 as core 13 on socket 1 00:06:20.533 EAL: Detected lcore 50 as core 14 on socket 1 00:06:20.533 EAL: Detected lcore 51 as core 15 on socket 1 00:06:20.533 EAL: Detected lcore 52 as core 16 on socket 1 00:06:20.533 EAL: Detected lcore 53 as core 17 on socket 1 00:06:20.533 EAL: Detected lcore 54 as core 18 on socket 1 00:06:20.533 EAL: Detected lcore 55 as core 19 on socket 1 00:06:20.533 EAL: Detected lcore 56 as core 20 on socket 1 00:06:20.533 EAL: Detected lcore 57 as core 21 on socket 1 00:06:20.533 EAL: Detected lcore 58 as core 22 on socket 1 00:06:20.533 EAL: Detected lcore 59 as 
core 23 on socket 1 00:06:20.533 EAL: Detected lcore 60 as core 24 on socket 1 00:06:20.533 EAL: Detected lcore 61 as core 25 on socket 1 00:06:20.533 EAL: Detected lcore 62 as core 26 on socket 1 00:06:20.533 EAL: Detected lcore 63 as core 27 on socket 1 00:06:20.533 EAL: Detected lcore 64 as core 28 on socket 1 00:06:20.533 EAL: Detected lcore 65 as core 29 on socket 1 00:06:20.533 EAL: Detected lcore 66 as core 30 on socket 1 00:06:20.533 EAL: Detected lcore 67 as core 31 on socket 1 00:06:20.533 EAL: Detected lcore 68 as core 32 on socket 1 00:06:20.533 EAL: Detected lcore 69 as core 33 on socket 1 00:06:20.533 EAL: Detected lcore 70 as core 34 on socket 1 00:06:20.533 EAL: Detected lcore 71 as core 35 on socket 1 00:06:20.533 EAL: Detected lcore 72 as core 0 on socket 0 00:06:20.533 EAL: Detected lcore 73 as core 1 on socket 0 00:06:20.533 EAL: Detected lcore 74 as core 2 on socket 0 00:06:20.533 EAL: Detected lcore 75 as core 3 on socket 0 00:06:20.533 EAL: Detected lcore 76 as core 4 on socket 0 00:06:20.533 EAL: Detected lcore 77 as core 5 on socket 0 00:06:20.533 EAL: Detected lcore 78 as core 6 on socket 0 00:06:20.533 EAL: Detected lcore 79 as core 7 on socket 0 00:06:20.533 EAL: Detected lcore 80 as core 8 on socket 0 00:06:20.533 EAL: Detected lcore 81 as core 9 on socket 0 00:06:20.533 EAL: Detected lcore 82 as core 10 on socket 0 00:06:20.533 EAL: Detected lcore 83 as core 11 on socket 0 00:06:20.533 EAL: Detected lcore 84 as core 12 on socket 0 00:06:20.533 EAL: Detected lcore 85 as core 13 on socket 0 00:06:20.533 EAL: Detected lcore 86 as core 14 on socket 0 00:06:20.533 EAL: Detected lcore 87 as core 15 on socket 0 00:06:20.533 EAL: Detected lcore 88 as core 16 on socket 0 00:06:20.533 EAL: Detected lcore 89 as core 17 on socket 0 00:06:20.533 EAL: Detected lcore 90 as core 18 on socket 0 00:06:20.533 EAL: Detected lcore 91 as core 19 on socket 0 00:06:20.533 EAL: Detected lcore 92 as core 20 on socket 0 00:06:20.533 EAL: Detected lcore 93 as 
core 21 on socket 0 00:06:20.533 EAL: Detected lcore 94 as core 22 on socket 0 00:06:20.533 EAL: Detected lcore 95 as core 23 on socket 0 00:06:20.533 EAL: Detected lcore 96 as core 24 on socket 0 00:06:20.533 EAL: Detected lcore 97 as core 25 on socket 0 00:06:20.533 EAL: Detected lcore 98 as core 26 on socket 0 00:06:20.533 EAL: Detected lcore 99 as core 27 on socket 0 00:06:20.533 EAL: Detected lcore 100 as core 28 on socket 0 00:06:20.533 EAL: Detected lcore 101 as core 29 on socket 0 00:06:20.533 EAL: Detected lcore 102 as core 30 on socket 0 00:06:20.533 EAL: Detected lcore 103 as core 31 on socket 0 00:06:20.533 EAL: Detected lcore 104 as core 32 on socket 0 00:06:20.533 EAL: Detected lcore 105 as core 33 on socket 0 00:06:20.533 EAL: Detected lcore 106 as core 34 on socket 0 00:06:20.533 EAL: Detected lcore 107 as core 35 on socket 0 00:06:20.533 EAL: Detected lcore 108 as core 0 on socket 1 00:06:20.533 EAL: Detected lcore 109 as core 1 on socket 1 00:06:20.533 EAL: Detected lcore 110 as core 2 on socket 1 00:06:20.533 EAL: Detected lcore 111 as core 3 on socket 1 00:06:20.533 EAL: Detected lcore 112 as core 4 on socket 1 00:06:20.533 EAL: Detected lcore 113 as core 5 on socket 1 00:06:20.533 EAL: Detected lcore 114 as core 6 on socket 1 00:06:20.533 EAL: Detected lcore 115 as core 7 on socket 1 00:06:20.533 EAL: Detected lcore 116 as core 8 on socket 1 00:06:20.533 EAL: Detected lcore 117 as core 9 on socket 1 00:06:20.533 EAL: Detected lcore 118 as core 10 on socket 1 00:06:20.533 EAL: Detected lcore 119 as core 11 on socket 1 00:06:20.533 EAL: Detected lcore 120 as core 12 on socket 1 00:06:20.533 EAL: Detected lcore 121 as core 13 on socket 1 00:06:20.533 EAL: Detected lcore 122 as core 14 on socket 1 00:06:20.533 EAL: Detected lcore 123 as core 15 on socket 1 00:06:20.533 EAL: Detected lcore 124 as core 16 on socket 1 00:06:20.533 EAL: Detected lcore 125 as core 17 on socket 1 00:06:20.533 EAL: Detected lcore 126 as core 18 on socket 1 00:06:20.533 
EAL: Detected lcore 127 as core 19 on socket 1 00:06:20.533 EAL: Skipped lcore 128 as core 20 on socket 1 00:06:20.533 EAL: Skipped lcore 129 as core 21 on socket 1 00:06:20.533 EAL: Skipped lcore 130 as core 22 on socket 1 00:06:20.533 EAL: Skipped lcore 131 as core 23 on socket 1 00:06:20.533 EAL: Skipped lcore 132 as core 24 on socket 1 00:06:20.533 EAL: Skipped lcore 133 as core 25 on socket 1 00:06:20.533 EAL: Skipped lcore 134 as core 26 on socket 1 00:06:20.533 EAL: Skipped lcore 135 as core 27 on socket 1 00:06:20.533 EAL: Skipped lcore 136 as core 28 on socket 1 00:06:20.533 EAL: Skipped lcore 137 as core 29 on socket 1 00:06:20.533 EAL: Skipped lcore 138 as core 30 on socket 1 00:06:20.533 EAL: Skipped lcore 139 as core 31 on socket 1 00:06:20.533 EAL: Skipped lcore 140 as core 32 on socket 1 00:06:20.533 EAL: Skipped lcore 141 as core 33 on socket 1 00:06:20.533 EAL: Skipped lcore 142 as core 34 on socket 1 00:06:20.533 EAL: Skipped lcore 143 as core 35 on socket 1 00:06:20.533 EAL: Maximum logical cores by configuration: 128 00:06:20.533 EAL: Detected CPU lcores: 128 00:06:20.533 EAL: Detected NUMA nodes: 2 00:06:20.533 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:20.533 EAL: Detected shared linkage of DPDK 00:06:20.533 EAL: No shared files mode enabled, IPC will be disabled 00:06:20.533 EAL: Bus pci wants IOVA as 'DC' 00:06:20.533 EAL: Buses did not request a specific IOVA mode. 00:06:20.533 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:20.533 EAL: Selected IOVA mode 'VA' 00:06:20.533 EAL: Probing VFIO support... 00:06:20.533 EAL: IOMMU type 1 (Type 1) is supported 00:06:20.534 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:20.534 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:20.534 EAL: VFIO support initialized 00:06:20.534 EAL: Ask a virtual area of 0x2e000 bytes 00:06:20.534 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:20.534 EAL: Setting up physically contiguous memory... 
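The `Detected NUMA nodes: 2` line above follows from the per-lcore topology messages: every `Detected lcore N as core M on socket S` entry ends with its socket id. A minimal sketch of recovering that count from the log (the sample lines below stand in for the full output above):

```shell
# Sample of the EAL topology lines above; a real run would feed the whole log
log='EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 36 as core 0 on socket 1
EAL: Detected lcore 72 as core 0 on socket 0'
# The last field of each line is the socket id; count the distinct values
echo "$log" | awk '{print $NF}' | sort -u | wc -l
```

This prints 2 for the sample, matching the two sockets (0 and 1) seen throughout the detection output.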
00:06:20.534 EAL: Setting maximum number of open files to 524288 00:06:20.534 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:20.534 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:20.534 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:20.534 EAL: Ask a virtual area of 0x61000 bytes 00:06:20.534 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:20.534 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:20.534 EAL: Ask a virtual area of 0x400000000 bytes 00:06:20.534 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:20.534 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:20.534 EAL: Ask a virtual area of 0x61000 bytes 00:06:20.534 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:20.534 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:20.534 EAL: Ask a virtual area of 0x400000000 bytes 00:06:20.534 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:20.534 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:20.534 EAL: Ask a virtual area of 0x61000 bytes 00:06:20.534 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:20.534 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:20.534 EAL: Ask a virtual area of 0x400000000 bytes 00:06:20.534 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:20.534 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:20.534 EAL: Ask a virtual area of 0x61000 bytes 00:06:20.534 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:20.534 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:20.534 EAL: Ask a virtual area of 0x400000000 bytes 00:06:20.534 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:20.534 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:20.534 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:06:20.534 EAL: Ask a virtual area of 0x61000 bytes 00:06:20.534 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:20.534 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:20.534 EAL: Ask a virtual area of 0x400000000 bytes 00:06:20.534 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:20.534 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:20.534 EAL: Ask a virtual area of 0x61000 bytes 00:06:20.534 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:20.534 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:20.534 EAL: Ask a virtual area of 0x400000000 bytes 00:06:20.534 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:20.534 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:20.534 EAL: Ask a virtual area of 0x61000 bytes 00:06:20.534 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:20.534 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:20.534 EAL: Ask a virtual area of 0x400000000 bytes 00:06:20.534 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:20.534 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:20.534 EAL: Ask a virtual area of 0x61000 bytes 00:06:20.534 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:20.534 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:20.534 EAL: Ask a virtual area of 0x400000000 bytes 00:06:20.534 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:20.534 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:20.534 EAL: Hugepages will be freed exactly as allocated. 
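The `size = 0x400000000` reservations above follow directly from the memseg list parameters EAL prints (`n_segs:8192`, `hugepage_sz:2097152`). A sketch of the arithmetic, assuming nothing beyond those printed values:

```shell
# Each memseg list covers n_segs hugepages of hugepage_sz bytes
n_segs=8192
hugepage_sz=2097152            # 2 MiB hugepages, as printed above
per_list=$((n_segs * hugepage_sz))
printf '0x%x\n' "$per_list"    # the 0x400000000 (16 GiB) reservations above
# Four lists per socket, two sockets, as created above
printf '%d GiB reserved\n' $(( 4 * 2 * per_list / 1024**3 ))
```

So each socket reserves four 16 GiB virtual ranges, 128 GiB of address space in total; the areas are only VA reservations, backed by hugepages on demand.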
00:06:20.534 EAL: No shared files mode enabled, IPC is disabled 00:06:20.534 EAL: No shared files mode enabled, IPC is disabled 00:06:20.534 EAL: TSC frequency is ~2400000 KHz 00:06:20.534 EAL: Main lcore 0 is ready (tid=7fb1a57c2a40;cpuset=[0]) 00:06:20.534 EAL: Trying to obtain current memory policy. 00:06:20.534 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.534 EAL: Restoring previous memory policy: 0 00:06:20.534 EAL: request: mp_malloc_sync 00:06:20.534 EAL: No shared files mode enabled, IPC is disabled 00:06:20.534 EAL: Heap on socket 0 was expanded by 2MB 00:06:20.534 EAL: No shared files mode enabled, IPC is disabled 00:06:20.534 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:20.534 EAL: Mem event callback 'spdk:(nil)' registered 00:06:20.534 00:06:20.534 00:06:20.534 CUnit - A unit testing framework for C - Version 2.1-3 00:06:20.534 http://cunit.sourceforge.net/ 00:06:20.534 00:06:20.534 00:06:20.534 Suite: components_suite 00:06:20.795 Test: vtophys_malloc_test ...passed 00:06:20.795 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:20.795 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.795 EAL: Restoring previous memory policy: 4 00:06:20.795 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.795 EAL: request: mp_malloc_sync 00:06:20.795 EAL: No shared files mode enabled, IPC is disabled 00:06:20.795 EAL: Heap on socket 0 was expanded by 4MB 00:06:20.795 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.795 EAL: request: mp_malloc_sync 00:06:20.795 EAL: No shared files mode enabled, IPC is disabled 00:06:20.795 EAL: Heap on socket 0 was shrunk by 4MB 00:06:20.795 EAL: Trying to obtain current memory policy. 
00:06:20.795 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.795 EAL: Restoring previous memory policy: 4 00:06:20.795 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.795 EAL: request: mp_malloc_sync 00:06:20.795 EAL: No shared files mode enabled, IPC is disabled 00:06:20.795 EAL: Heap on socket 0 was expanded by 6MB 00:06:20.795 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.795 EAL: request: mp_malloc_sync 00:06:20.795 EAL: No shared files mode enabled, IPC is disabled 00:06:20.795 EAL: Heap on socket 0 was shrunk by 6MB 00:06:20.795 EAL: Trying to obtain current memory policy. 00:06:20.795 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.795 EAL: Restoring previous memory policy: 4 00:06:20.795 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.795 EAL: request: mp_malloc_sync 00:06:20.795 EAL: No shared files mode enabled, IPC is disabled 00:06:20.795 EAL: Heap on socket 0 was expanded by 10MB 00:06:20.795 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.795 EAL: request: mp_malloc_sync 00:06:20.795 EAL: No shared files mode enabled, IPC is disabled 00:06:20.795 EAL: Heap on socket 0 was shrunk by 10MB 00:06:20.795 EAL: Trying to obtain current memory policy. 00:06:20.795 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.795 EAL: Restoring previous memory policy: 4 00:06:20.795 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.795 EAL: request: mp_malloc_sync 00:06:20.795 EAL: No shared files mode enabled, IPC is disabled 00:06:20.795 EAL: Heap on socket 0 was expanded by 18MB 00:06:21.055 EAL: Calling mem event callback 'spdk:(nil)' 00:06:21.055 EAL: request: mp_malloc_sync 00:06:21.055 EAL: No shared files mode enabled, IPC is disabled 00:06:21.055 EAL: Heap on socket 0 was shrunk by 18MB 00:06:21.055 EAL: Trying to obtain current memory policy. 
00:06:21.055 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:21.055 EAL: Restoring previous memory policy: 4 00:06:21.055 EAL: Calling mem event callback 'spdk:(nil)' 00:06:21.055 EAL: request: mp_malloc_sync 00:06:21.055 EAL: No shared files mode enabled, IPC is disabled 00:06:21.055 EAL: Heap on socket 0 was expanded by 34MB 00:06:21.055 EAL: Calling mem event callback 'spdk:(nil)' 00:06:21.055 EAL: request: mp_malloc_sync 00:06:21.055 EAL: No shared files mode enabled, IPC is disabled 00:06:21.055 EAL: Heap on socket 0 was shrunk by 34MB 00:06:21.055 EAL: Trying to obtain current memory policy. 00:06:21.055 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:21.055 EAL: Restoring previous memory policy: 4 00:06:21.055 EAL: Calling mem event callback 'spdk:(nil)' 00:06:21.055 EAL: request: mp_malloc_sync 00:06:21.055 EAL: No shared files mode enabled, IPC is disabled 00:06:21.055 EAL: Heap on socket 0 was expanded by 66MB 00:06:21.055 EAL: Calling mem event callback 'spdk:(nil)' 00:06:21.055 EAL: request: mp_malloc_sync 00:06:21.055 EAL: No shared files mode enabled, IPC is disabled 00:06:21.055 EAL: Heap on socket 0 was shrunk by 66MB 00:06:21.316 EAL: Trying to obtain current memory policy. 00:06:21.316 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:21.316 EAL: Restoring previous memory policy: 4 00:06:21.316 EAL: Calling mem event callback 'spdk:(nil)' 00:06:21.316 EAL: request: mp_malloc_sync 00:06:21.316 EAL: No shared files mode enabled, IPC is disabled 00:06:21.316 EAL: Heap on socket 0 was expanded by 130MB 00:06:21.316 EAL: Calling mem event callback 'spdk:(nil)' 00:06:21.316 EAL: request: mp_malloc_sync 00:06:21.316 EAL: No shared files mode enabled, IPC is disabled 00:06:21.316 EAL: Heap on socket 0 was shrunk by 130MB 00:06:21.576 EAL: Trying to obtain current memory policy. 
00:06:21.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:21.576 EAL: Restoring previous memory policy: 4 00:06:21.576 EAL: Calling mem event callback 'spdk:(nil)' 00:06:21.576 EAL: request: mp_malloc_sync 00:06:21.576 EAL: No shared files mode enabled, IPC is disabled 00:06:21.576 EAL: Heap on socket 0 was expanded by 258MB 00:06:21.836 EAL: Calling mem event callback 'spdk:(nil)' 00:06:21.836 EAL: request: mp_malloc_sync 00:06:21.836 EAL: No shared files mode enabled, IPC is disabled 00:06:21.836 EAL: Heap on socket 0 was shrunk by 258MB 00:06:22.097 EAL: Trying to obtain current memory policy. 00:06:22.097 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:22.357 EAL: Restoring previous memory policy: 4 00:06:22.357 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.357 EAL: request: mp_malloc_sync 00:06:22.357 EAL: No shared files mode enabled, IPC is disabled 00:06:22.357 EAL: Heap on socket 0 was expanded by 514MB 00:06:22.928 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.928 EAL: request: mp_malloc_sync 00:06:22.928 EAL: No shared files mode enabled, IPC is disabled 00:06:22.928 EAL: Heap on socket 0 was shrunk by 514MB 00:06:23.499 EAL: Trying to obtain current memory policy. 
00:06:23.499 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:23.499 EAL: Restoring previous memory policy: 4 00:06:23.499 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.499 EAL: request: mp_malloc_sync 00:06:23.499 EAL: No shared files mode enabled, IPC is disabled 00:06:23.499 EAL: Heap on socket 0 was expanded by 1026MB 00:06:24.880 EAL: Calling mem event callback 'spdk:(nil)' 00:06:24.880 EAL: request: mp_malloc_sync 00:06:24.880 EAL: No shared files mode enabled, IPC is disabled 00:06:24.880 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:26.260 passed 00:06:26.260 00:06:26.260 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.260 suites 1 1 n/a 0 0 00:06:26.260 tests 2 2 2 0 0 00:06:26.260 asserts 497 497 497 0 n/a 00:06:26.260 00:06:26.260 Elapsed time = 5.402 seconds 00:06:26.260 EAL: Calling mem event callback 'spdk:(nil)' 00:06:26.260 EAL: request: mp_malloc_sync 00:06:26.260 EAL: No shared files mode enabled, IPC is disabled 00:06:26.260 EAL: Heap on socket 0 was shrunk by 2MB 00:06:26.260 EAL: No shared files mode enabled, IPC is disabled 00:06:26.260 EAL: No shared files mode enabled, IPC is disabled 00:06:26.260 EAL: No shared files mode enabled, IPC is disabled 00:06:26.260 00:06:26.260 real 0m5.648s 00:06:26.260 user 0m4.885s 00:06:26.260 sys 0m0.722s 00:06:26.260 14:16:49 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.260 14:16:49 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:26.260 ************************************ 00:06:26.260 END TEST env_vtophys 00:06:26.260 ************************************ 00:06:26.260 14:16:49 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:26.260 14:16:49 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.260 14:16:49 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.260 14:16:49 env -- common/autotest_common.sh@10 -- # set +x 00:06:26.260 
************************************ 00:06:26.260 START TEST env_pci 00:06:26.260 ************************************ 00:06:26.260 14:16:49 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:26.260 00:06:26.260 00:06:26.260 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.260 http://cunit.sourceforge.net/ 00:06:26.260 00:06:26.260 00:06:26.260 Suite: pci 00:06:26.260 Test: pci_hook ...[2024-10-07 14:16:49.744710] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2760526 has claimed it 00:06:26.260 EAL: Cannot find device (10000:00:01.0) 00:06:26.260 EAL: Failed to attach device on primary process 00:06:26.260 passed 00:06:26.260 00:06:26.260 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.260 suites 1 1 n/a 0 0 00:06:26.260 tests 1 1 1 0 0 00:06:26.260 asserts 25 25 25 0 n/a 00:06:26.260 00:06:26.260 Elapsed time = 0.055 seconds 00:06:26.260 00:06:26.260 real 0m0.137s 00:06:26.260 user 0m0.055s 00:06:26.260 sys 0m0.081s 00:06:26.260 14:16:49 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.260 14:16:49 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:26.260 ************************************ 00:06:26.260 END TEST env_pci 00:06:26.260 ************************************ 00:06:26.260 14:16:49 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:26.260 14:16:49 env -- env/env.sh@15 -- # uname 00:06:26.260 14:16:49 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:26.260 14:16:49 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:26.260 14:16:49 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:26.260 14:16:49 env -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:26.260 14:16:49 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.260 14:16:49 env -- common/autotest_common.sh@10 -- # set +x 00:06:26.260 ************************************ 00:06:26.260 START TEST env_dpdk_post_init 00:06:26.260 ************************************ 00:06:26.260 14:16:49 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:26.520 EAL: Detected CPU lcores: 128 00:06:26.520 EAL: Detected NUMA nodes: 2 00:06:26.520 EAL: Detected shared linkage of DPDK 00:06:26.520 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:26.520 EAL: Selected IOVA mode 'VA' 00:06:26.520 EAL: VFIO support initialized 00:06:26.520 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:26.520 EAL: Using IOMMU type 1 (Type 1) 00:06:26.779 EAL: Ignore mapping IO port bar(1) 00:06:26.779 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:06:27.039 EAL: Ignore mapping IO port bar(1) 00:06:27.039 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:06:27.039 EAL: Ignore mapping IO port bar(1) 00:06:27.298 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:06:27.298 EAL: Ignore mapping IO port bar(1) 00:06:27.558 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:06:27.558 EAL: Ignore mapping IO port bar(1) 00:06:27.818 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:06:27.818 EAL: Ignore mapping IO port bar(1) 00:06:27.818 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:06:28.079 EAL: Ignore mapping IO port bar(1) 00:06:28.079 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:06:28.340 EAL: Ignore mapping IO port bar(1) 00:06:28.340 EAL: 
Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:06:28.600 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:06:28.861 EAL: Ignore mapping IO port bar(1) 00:06:28.861 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:06:28.861 EAL: Ignore mapping IO port bar(1) 00:06:29.121 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:06:29.121 EAL: Ignore mapping IO port bar(1) 00:06:29.381 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:06:29.381 EAL: Ignore mapping IO port bar(1) 00:06:29.381 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:06:29.642 EAL: Ignore mapping IO port bar(1) 00:06:29.642 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:06:29.903 EAL: Ignore mapping IO port bar(1) 00:06:29.903 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:06:30.163 EAL: Ignore mapping IO port bar(1) 00:06:30.163 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:06:30.423 EAL: Ignore mapping IO port bar(1) 00:06:30.423 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:06:30.423 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:06:30.423 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:06:30.423 Starting DPDK initialization... 00:06:30.423 Starting SPDK post initialization... 00:06:30.423 SPDK NVMe probe 00:06:30.423 Attaching to 0000:65:00.0 00:06:30.423 Attached to 0000:65:00.0 00:06:30.423 Cleaning up... 
00:06:32.336 00:06:32.336 real 0m5.872s 00:06:32.336 user 0m0.171s 00:06:32.336 sys 0m0.246s 00:06:32.336 14:16:55 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.336 14:16:55 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:32.336 ************************************ 00:06:32.336 END TEST env_dpdk_post_init 00:06:32.336 ************************************ 00:06:32.336 14:16:55 env -- env/env.sh@26 -- # uname 00:06:32.336 14:16:55 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:32.336 14:16:55 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:32.336 14:16:55 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.336 14:16:55 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.336 14:16:55 env -- common/autotest_common.sh@10 -- # set +x 00:06:32.336 ************************************ 00:06:32.336 START TEST env_mem_callbacks 00:06:32.336 ************************************ 00:06:32.336 14:16:55 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:32.336 EAL: Detected CPU lcores: 128 00:06:32.336 EAL: Detected NUMA nodes: 2 00:06:32.336 EAL: Detected shared linkage of DPDK 00:06:32.336 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:32.336 EAL: Selected IOVA mode 'VA' 00:06:32.336 EAL: VFIO support initialized 00:06:32.336 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:32.336 00:06:32.336 00:06:32.336 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.336 http://cunit.sourceforge.net/ 00:06:32.336 00:06:32.336 00:06:32.336 Suite: memory 00:06:32.336 Test: test ... 
00:06:32.336 register 0x200000200000 2097152 00:06:32.336 malloc 3145728 00:06:32.336 register 0x200000400000 4194304 00:06:32.336 buf 0x2000004fffc0 len 3145728 PASSED 00:06:32.336 malloc 64 00:06:32.336 buf 0x2000004ffec0 len 64 PASSED 00:06:32.336 malloc 4194304 00:06:32.336 register 0x200000800000 6291456 00:06:32.336 buf 0x2000009fffc0 len 4194304 PASSED 00:06:32.336 free 0x2000004fffc0 3145728 00:06:32.336 free 0x2000004ffec0 64 00:06:32.336 unregister 0x200000400000 4194304 PASSED 00:06:32.336 free 0x2000009fffc0 4194304 00:06:32.336 unregister 0x200000800000 6291456 PASSED 00:06:32.336 malloc 8388608 00:06:32.336 register 0x200000400000 10485760 00:06:32.336 buf 0x2000005fffc0 len 8388608 PASSED 00:06:32.336 free 0x2000005fffc0 8388608 00:06:32.336 unregister 0x200000400000 10485760 PASSED 00:06:32.336 passed 00:06:32.336 00:06:32.336 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.336 suites 1 1 n/a 0 0 00:06:32.336 tests 1 1 1 0 0 00:06:32.336 asserts 15 15 15 0 n/a 00:06:32.336 00:06:32.336 Elapsed time = 0.047 seconds 00:06:32.598 00:06:32.598 real 0m0.167s 00:06:32.598 user 0m0.082s 00:06:32.598 sys 0m0.084s 00:06:32.598 14:16:56 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.598 14:16:56 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:32.598 ************************************ 00:06:32.598 END TEST env_mem_callbacks 00:06:32.598 ************************************ 00:06:32.598 00:06:32.598 real 0m12.728s 00:06:32.598 user 0m5.740s 00:06:32.598 sys 0m1.524s 00:06:32.598 14:16:56 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.598 14:16:56 env -- common/autotest_common.sh@10 -- # set +x 00:06:32.598 ************************************ 00:06:32.598 END TEST env 00:06:32.598 ************************************ 00:06:32.598 14:16:56 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:32.598 14:16:56 
-- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.598 14:16:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.598 14:16:56 -- common/autotest_common.sh@10 -- # set +x 00:06:32.598 ************************************ 00:06:32.598 START TEST rpc 00:06:32.598 ************************************ 00:06:32.598 14:16:56 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:32.598 * Looking for test storage... 00:06:32.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:32.598 14:16:56 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:32.598 14:16:56 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:32.598 14:16:56 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:32.860 14:16:56 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:32.860 14:16:56 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.860 14:16:56 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.860 14:16:56 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.860 14:16:56 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.860 14:16:56 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.860 14:16:56 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.860 14:16:56 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.860 14:16:56 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.860 14:16:56 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.860 14:16:56 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.860 14:16:56 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.860 14:16:56 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:32.860 14:16:56 rpc -- scripts/common.sh@345 -- # : 1 00:06:32.860 14:16:56 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.860 14:16:56 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.860 14:16:56 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:32.860 14:16:56 rpc -- scripts/common.sh@353 -- # local d=1 00:06:32.860 14:16:56 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.860 14:16:56 rpc -- scripts/common.sh@355 -- # echo 1 00:06:32.860 14:16:56 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.860 14:16:56 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:32.860 14:16:56 rpc -- scripts/common.sh@353 -- # local d=2 00:06:32.860 14:16:56 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.860 14:16:56 rpc -- scripts/common.sh@355 -- # echo 2 00:06:32.860 14:16:56 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.860 14:16:56 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.860 14:16:56 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.860 14:16:56 rpc -- scripts/common.sh@368 -- # return 0 00:06:32.860 14:16:56 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.860 14:16:56 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:32.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.860 --rc genhtml_branch_coverage=1 00:06:32.860 --rc genhtml_function_coverage=1 00:06:32.860 --rc genhtml_legend=1 00:06:32.860 --rc geninfo_all_blocks=1 00:06:32.860 --rc geninfo_unexecuted_blocks=1 00:06:32.860 00:06:32.860 ' 00:06:32.860 14:16:56 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:32.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.860 --rc genhtml_branch_coverage=1 00:06:32.860 --rc genhtml_function_coverage=1 00:06:32.860 --rc genhtml_legend=1 00:06:32.860 --rc geninfo_all_blocks=1 00:06:32.860 --rc geninfo_unexecuted_blocks=1 00:06:32.860 00:06:32.860 ' 00:06:32.860 14:16:56 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:32.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:32.860 --rc genhtml_branch_coverage=1 00:06:32.860 --rc genhtml_function_coverage=1 00:06:32.860 --rc genhtml_legend=1 00:06:32.860 --rc geninfo_all_blocks=1 00:06:32.860 --rc geninfo_unexecuted_blocks=1 00:06:32.860 00:06:32.860 ' 00:06:32.860 14:16:56 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:32.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.860 --rc genhtml_branch_coverage=1 00:06:32.860 --rc genhtml_function_coverage=1 00:06:32.860 --rc genhtml_legend=1 00:06:32.860 --rc geninfo_all_blocks=1 00:06:32.860 --rc geninfo_unexecuted_blocks=1 00:06:32.860 00:06:32.860 ' 00:06:32.860 14:16:56 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:32.860 14:16:56 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2761954 00:06:32.860 14:16:56 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:32.860 14:16:56 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2761954 00:06:32.860 14:16:56 rpc -- common/autotest_common.sh@831 -- # '[' -z 2761954 ']' 00:06:32.860 14:16:56 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.860 14:16:56 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.860 14:16:56 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.860 14:16:56 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.860 14:16:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.860 [2024-10-07 14:16:56.457011] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:06:32.860 [2024-10-07 14:16:56.457143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2761954 ] 00:06:33.121 [2024-10-07 14:16:56.592266] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.121 [2024-10-07 14:16:56.771779] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:33.121 [2024-10-07 14:16:56.771829] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2761954' to capture a snapshot of events at runtime. 00:06:33.121 [2024-10-07 14:16:56.771842] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:33.121 [2024-10-07 14:16:56.771853] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:33.121 [2024-10-07 14:16:56.771864] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2761954 for offline analysis/debug. 
00:06:33.121 [2024-10-07 14:16:56.773105] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.062 14:16:57 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.062 14:16:57 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:34.062 14:16:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:34.062 14:16:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:34.062 14:16:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:34.062 14:16:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:34.062 14:16:57 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.062 14:16:57 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.062 14:16:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.062 ************************************ 00:06:34.062 START TEST rpc_integrity 00:06:34.062 ************************************ 00:06:34.062 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:34.062 14:16:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:34.062 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.062 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.062 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.062 14:16:57 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:06:34.062 14:16:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:34.062 14:16:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:34.062 14:16:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:34.062 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.062 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.062 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.062 14:16:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:34.062 14:16:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:34.062 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.062 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.062 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.062 14:16:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:34.062 { 00:06:34.062 "name": "Malloc0", 00:06:34.062 "aliases": [ 00:06:34.062 "9521d211-2fe1-4aa7-8f0b-fc199d1ff88a" 00:06:34.062 ], 00:06:34.062 "product_name": "Malloc disk", 00:06:34.063 "block_size": 512, 00:06:34.063 "num_blocks": 16384, 00:06:34.063 "uuid": "9521d211-2fe1-4aa7-8f0b-fc199d1ff88a", 00:06:34.063 "assigned_rate_limits": { 00:06:34.063 "rw_ios_per_sec": 0, 00:06:34.063 "rw_mbytes_per_sec": 0, 00:06:34.063 "r_mbytes_per_sec": 0, 00:06:34.063 "w_mbytes_per_sec": 0 00:06:34.063 }, 00:06:34.063 "claimed": false, 00:06:34.063 "zoned": false, 00:06:34.063 "supported_io_types": { 00:06:34.063 "read": true, 00:06:34.063 "write": true, 00:06:34.063 "unmap": true, 00:06:34.063 "flush": true, 00:06:34.063 "reset": true, 00:06:34.063 "nvme_admin": false, 00:06:34.063 "nvme_io": false, 00:06:34.063 "nvme_io_md": false, 00:06:34.063 "write_zeroes": true, 00:06:34.063 "zcopy": true, 00:06:34.063 "get_zone_info": false, 00:06:34.063 
"zone_management": false, 00:06:34.063 "zone_append": false, 00:06:34.063 "compare": false, 00:06:34.063 "compare_and_write": false, 00:06:34.063 "abort": true, 00:06:34.063 "seek_hole": false, 00:06:34.063 "seek_data": false, 00:06:34.063 "copy": true, 00:06:34.063 "nvme_iov_md": false 00:06:34.063 }, 00:06:34.063 "memory_domains": [ 00:06:34.063 { 00:06:34.063 "dma_device_id": "system", 00:06:34.063 "dma_device_type": 1 00:06:34.063 }, 00:06:34.063 { 00:06:34.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.063 "dma_device_type": 2 00:06:34.063 } 00:06:34.063 ], 00:06:34.063 "driver_specific": {} 00:06:34.063 } 00:06:34.063 ]' 00:06:34.063 14:16:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:34.063 14:16:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:34.063 14:16:57 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:34.063 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.063 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.063 [2024-10-07 14:16:57.605020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:34.063 [2024-10-07 14:16:57.605085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:34.063 [2024-10-07 14:16:57.605112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600001fe80 00:06:34.063 [2024-10-07 14:16:57.605126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:34.063 [2024-10-07 14:16:57.607416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:34.063 [2024-10-07 14:16:57.607447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:34.063 Passthru0 00:06:34.063 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.063 14:16:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:06:34.063 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.063 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.063 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.063 14:16:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:34.063 { 00:06:34.063 "name": "Malloc0", 00:06:34.063 "aliases": [ 00:06:34.063 "9521d211-2fe1-4aa7-8f0b-fc199d1ff88a" 00:06:34.063 ], 00:06:34.063 "product_name": "Malloc disk", 00:06:34.063 "block_size": 512, 00:06:34.063 "num_blocks": 16384, 00:06:34.063 "uuid": "9521d211-2fe1-4aa7-8f0b-fc199d1ff88a", 00:06:34.063 "assigned_rate_limits": { 00:06:34.063 "rw_ios_per_sec": 0, 00:06:34.063 "rw_mbytes_per_sec": 0, 00:06:34.063 "r_mbytes_per_sec": 0, 00:06:34.063 "w_mbytes_per_sec": 0 00:06:34.063 }, 00:06:34.063 "claimed": true, 00:06:34.063 "claim_type": "exclusive_write", 00:06:34.063 "zoned": false, 00:06:34.063 "supported_io_types": { 00:06:34.063 "read": true, 00:06:34.063 "write": true, 00:06:34.063 "unmap": true, 00:06:34.063 "flush": true, 00:06:34.063 "reset": true, 00:06:34.063 "nvme_admin": false, 00:06:34.063 "nvme_io": false, 00:06:34.063 "nvme_io_md": false, 00:06:34.063 "write_zeroes": true, 00:06:34.063 "zcopy": true, 00:06:34.063 "get_zone_info": false, 00:06:34.063 "zone_management": false, 00:06:34.063 "zone_append": false, 00:06:34.063 "compare": false, 00:06:34.063 "compare_and_write": false, 00:06:34.063 "abort": true, 00:06:34.063 "seek_hole": false, 00:06:34.063 "seek_data": false, 00:06:34.063 "copy": true, 00:06:34.063 "nvme_iov_md": false 00:06:34.063 }, 00:06:34.063 "memory_domains": [ 00:06:34.063 { 00:06:34.063 "dma_device_id": "system", 00:06:34.063 "dma_device_type": 1 00:06:34.063 }, 00:06:34.063 { 00:06:34.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.063 "dma_device_type": 2 00:06:34.063 } 00:06:34.063 ], 00:06:34.063 "driver_specific": {} 00:06:34.063 }, 00:06:34.063 { 
00:06:34.063 "name": "Passthru0", 00:06:34.063 "aliases": [ 00:06:34.063 "bfb7b929-448c-5494-b6de-6e9e1afa0014" 00:06:34.063 ], 00:06:34.063 "product_name": "passthru", 00:06:34.063 "block_size": 512, 00:06:34.063 "num_blocks": 16384, 00:06:34.063 "uuid": "bfb7b929-448c-5494-b6de-6e9e1afa0014", 00:06:34.063 "assigned_rate_limits": { 00:06:34.063 "rw_ios_per_sec": 0, 00:06:34.063 "rw_mbytes_per_sec": 0, 00:06:34.063 "r_mbytes_per_sec": 0, 00:06:34.063 "w_mbytes_per_sec": 0 00:06:34.063 }, 00:06:34.063 "claimed": false, 00:06:34.063 "zoned": false, 00:06:34.063 "supported_io_types": { 00:06:34.063 "read": true, 00:06:34.063 "write": true, 00:06:34.063 "unmap": true, 00:06:34.063 "flush": true, 00:06:34.063 "reset": true, 00:06:34.063 "nvme_admin": false, 00:06:34.063 "nvme_io": false, 00:06:34.063 "nvme_io_md": false, 00:06:34.063 "write_zeroes": true, 00:06:34.063 "zcopy": true, 00:06:34.063 "get_zone_info": false, 00:06:34.063 "zone_management": false, 00:06:34.063 "zone_append": false, 00:06:34.063 "compare": false, 00:06:34.063 "compare_and_write": false, 00:06:34.063 "abort": true, 00:06:34.063 "seek_hole": false, 00:06:34.063 "seek_data": false, 00:06:34.063 "copy": true, 00:06:34.063 "nvme_iov_md": false 00:06:34.063 }, 00:06:34.063 "memory_domains": [ 00:06:34.063 { 00:06:34.063 "dma_device_id": "system", 00:06:34.063 "dma_device_type": 1 00:06:34.063 }, 00:06:34.063 { 00:06:34.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.063 "dma_device_type": 2 00:06:34.063 } 00:06:34.063 ], 00:06:34.063 "driver_specific": { 00:06:34.063 "passthru": { 00:06:34.063 "name": "Passthru0", 00:06:34.063 "base_bdev_name": "Malloc0" 00:06:34.063 } 00:06:34.063 } 00:06:34.063 } 00:06:34.063 ]' 00:06:34.063 14:16:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:34.063 14:16:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:34.063 14:16:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:34.063 14:16:57 
rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.063 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.063 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.063 14:16:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:34.063 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.063 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.063 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.063 14:16:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:34.063 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.063 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.063 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.063 14:16:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:34.063 14:16:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:34.325 14:16:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:34.325 00:06:34.325 real 0m0.321s 00:06:34.325 user 0m0.204s 00:06:34.325 sys 0m0.033s 00:06:34.325 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.325 14:16:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.325 ************************************ 00:06:34.325 END TEST rpc_integrity 00:06:34.325 ************************************ 00:06:34.325 14:16:57 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:34.325 14:16:57 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.325 14:16:57 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.325 14:16:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.325 ************************************ 00:06:34.325 START TEST rpc_plugins 
00:06:34.325 ************************************ 00:06:34.325 14:16:57 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:34.325 14:16:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:34.325 14:16:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.325 14:16:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.325 14:16:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.325 14:16:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:34.325 14:16:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:34.325 14:16:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.325 14:16:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.325 14:16:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.325 14:16:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:34.325 { 00:06:34.325 "name": "Malloc1", 00:06:34.325 "aliases": [ 00:06:34.325 "c2664365-5b82-484f-ad5c-b7bca47ba430" 00:06:34.325 ], 00:06:34.325 "product_name": "Malloc disk", 00:06:34.325 "block_size": 4096, 00:06:34.325 "num_blocks": 256, 00:06:34.325 "uuid": "c2664365-5b82-484f-ad5c-b7bca47ba430", 00:06:34.325 "assigned_rate_limits": { 00:06:34.325 "rw_ios_per_sec": 0, 00:06:34.325 "rw_mbytes_per_sec": 0, 00:06:34.325 "r_mbytes_per_sec": 0, 00:06:34.325 "w_mbytes_per_sec": 0 00:06:34.325 }, 00:06:34.325 "claimed": false, 00:06:34.325 "zoned": false, 00:06:34.325 "supported_io_types": { 00:06:34.325 "read": true, 00:06:34.325 "write": true, 00:06:34.325 "unmap": true, 00:06:34.325 "flush": true, 00:06:34.325 "reset": true, 00:06:34.325 "nvme_admin": false, 00:06:34.325 "nvme_io": false, 00:06:34.325 "nvme_io_md": false, 00:06:34.325 "write_zeroes": true, 00:06:34.325 "zcopy": true, 00:06:34.325 "get_zone_info": false, 00:06:34.325 "zone_management": false, 00:06:34.325 
"zone_append": false, 00:06:34.325 "compare": false, 00:06:34.325 "compare_and_write": false, 00:06:34.325 "abort": true, 00:06:34.325 "seek_hole": false, 00:06:34.325 "seek_data": false, 00:06:34.325 "copy": true, 00:06:34.325 "nvme_iov_md": false 00:06:34.325 }, 00:06:34.325 "memory_domains": [ 00:06:34.325 { 00:06:34.325 "dma_device_id": "system", 00:06:34.325 "dma_device_type": 1 00:06:34.325 }, 00:06:34.325 { 00:06:34.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.325 "dma_device_type": 2 00:06:34.325 } 00:06:34.325 ], 00:06:34.325 "driver_specific": {} 00:06:34.325 } 00:06:34.325 ]' 00:06:34.325 14:16:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:34.325 14:16:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:34.325 14:16:57 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:34.325 14:16:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.325 14:16:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.325 14:16:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.325 14:16:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:34.325 14:16:57 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.325 14:16:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.325 14:16:57 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.325 14:16:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:34.325 14:16:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:34.325 14:16:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:34.325 00:06:34.325 real 0m0.153s 00:06:34.325 user 0m0.090s 00:06:34.325 sys 0m0.023s 00:06:34.325 14:16:58 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.325 14:16:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:34.325 ************************************ 
00:06:34.325 END TEST rpc_plugins 00:06:34.325 ************************************ 00:06:34.590 14:16:58 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:34.590 14:16:58 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.590 14:16:58 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.590 14:16:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.590 ************************************ 00:06:34.590 START TEST rpc_trace_cmd_test 00:06:34.590 ************************************ 00:06:34.590 14:16:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:34.590 14:16:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:34.590 14:16:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:34.590 14:16:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.590 14:16:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.590 14:16:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.590 14:16:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:34.590 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2761954", 00:06:34.590 "tpoint_group_mask": "0x8", 00:06:34.590 "iscsi_conn": { 00:06:34.590 "mask": "0x2", 00:06:34.590 "tpoint_mask": "0x0" 00:06:34.590 }, 00:06:34.590 "scsi": { 00:06:34.590 "mask": "0x4", 00:06:34.590 "tpoint_mask": "0x0" 00:06:34.590 }, 00:06:34.590 "bdev": { 00:06:34.590 "mask": "0x8", 00:06:34.590 "tpoint_mask": "0xffffffffffffffff" 00:06:34.590 }, 00:06:34.590 "nvmf_rdma": { 00:06:34.590 "mask": "0x10", 00:06:34.590 "tpoint_mask": "0x0" 00:06:34.590 }, 00:06:34.590 "nvmf_tcp": { 00:06:34.590 "mask": "0x20", 00:06:34.590 "tpoint_mask": "0x0" 00:06:34.590 }, 00:06:34.590 "ftl": { 00:06:34.590 "mask": "0x40", 00:06:34.590 "tpoint_mask": "0x0" 00:06:34.590 }, 00:06:34.590 "blobfs": { 00:06:34.590 "mask": "0x80", 00:06:34.590 
"tpoint_mask": "0x0" 00:06:34.590 }, 00:06:34.590 "dsa": { 00:06:34.590 "mask": "0x200", 00:06:34.590 "tpoint_mask": "0x0" 00:06:34.590 }, 00:06:34.590 "thread": { 00:06:34.590 "mask": "0x400", 00:06:34.590 "tpoint_mask": "0x0" 00:06:34.590 }, 00:06:34.590 "nvme_pcie": { 00:06:34.590 "mask": "0x800", 00:06:34.590 "tpoint_mask": "0x0" 00:06:34.590 }, 00:06:34.590 "iaa": { 00:06:34.590 "mask": "0x1000", 00:06:34.590 "tpoint_mask": "0x0" 00:06:34.590 }, 00:06:34.590 "nvme_tcp": { 00:06:34.590 "mask": "0x2000", 00:06:34.590 "tpoint_mask": "0x0" 00:06:34.590 }, 00:06:34.590 "bdev_nvme": { 00:06:34.590 "mask": "0x4000", 00:06:34.590 "tpoint_mask": "0x0" 00:06:34.590 }, 00:06:34.590 "sock": { 00:06:34.590 "mask": "0x8000", 00:06:34.590 "tpoint_mask": "0x0" 00:06:34.590 }, 00:06:34.591 "blob": { 00:06:34.591 "mask": "0x10000", 00:06:34.591 "tpoint_mask": "0x0" 00:06:34.591 }, 00:06:34.591 "bdev_raid": { 00:06:34.591 "mask": "0x20000", 00:06:34.591 "tpoint_mask": "0x0" 00:06:34.591 }, 00:06:34.591 "scheduler": { 00:06:34.591 "mask": "0x40000", 00:06:34.591 "tpoint_mask": "0x0" 00:06:34.591 } 00:06:34.591 }' 00:06:34.591 14:16:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:34.591 14:16:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:34.591 14:16:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:34.591 14:16:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:34.591 14:16:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:34.591 14:16:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:34.591 14:16:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:34.591 14:16:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:34.591 14:16:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:34.851 14:16:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:06:34.851 00:06:34.851 real 0m0.253s 00:06:34.851 user 0m0.205s 00:06:34.851 sys 0m0.038s 00:06:34.851 14:16:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.851 14:16:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.851 ************************************ 00:06:34.851 END TEST rpc_trace_cmd_test 00:06:34.851 ************************************ 00:06:34.851 14:16:58 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:34.851 14:16:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:34.851 14:16:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:34.851 14:16:58 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.851 14:16:58 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.851 14:16:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.851 ************************************ 00:06:34.851 START TEST rpc_daemon_integrity 00:06:34.851 ************************************ 00:06:34.851 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:34.851 14:16:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:34.851 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.851 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.851 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.851 14:16:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:34.851 14:16:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:34.851 14:16:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:34.851 14:16:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:34.851 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.851 14:16:58 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:06:34.851 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.851 14:16:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:34.851 14:16:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:34.851 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.851 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:34.851 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.851 14:16:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:34.851 { 00:06:34.851 "name": "Malloc2", 00:06:34.851 "aliases": [ 00:06:34.851 "979776ce-2d16-44eb-8bce-efc16cbc6295" 00:06:34.851 ], 00:06:34.851 "product_name": "Malloc disk", 00:06:34.851 "block_size": 512, 00:06:34.851 "num_blocks": 16384, 00:06:34.851 "uuid": "979776ce-2d16-44eb-8bce-efc16cbc6295", 00:06:34.851 "assigned_rate_limits": { 00:06:34.851 "rw_ios_per_sec": 0, 00:06:34.851 "rw_mbytes_per_sec": 0, 00:06:34.851 "r_mbytes_per_sec": 0, 00:06:34.851 "w_mbytes_per_sec": 0 00:06:34.851 }, 00:06:34.851 "claimed": false, 00:06:34.851 "zoned": false, 00:06:34.851 "supported_io_types": { 00:06:34.851 "read": true, 00:06:34.851 "write": true, 00:06:34.851 "unmap": true, 00:06:34.851 "flush": true, 00:06:34.851 "reset": true, 00:06:34.851 "nvme_admin": false, 00:06:34.851 "nvme_io": false, 00:06:34.851 "nvme_io_md": false, 00:06:34.851 "write_zeroes": true, 00:06:34.851 "zcopy": true, 00:06:34.851 "get_zone_info": false, 00:06:34.851 "zone_management": false, 00:06:34.851 "zone_append": false, 00:06:34.851 "compare": false, 00:06:34.851 "compare_and_write": false, 00:06:34.851 "abort": true, 00:06:34.851 "seek_hole": false, 00:06:34.851 "seek_data": false, 00:06:34.851 "copy": true, 00:06:34.851 "nvme_iov_md": false 00:06:34.851 }, 00:06:34.851 "memory_domains": [ 00:06:34.851 { 
00:06:34.851 "dma_device_id": "system", 00:06:34.851 "dma_device_type": 1 00:06:34.851 }, 00:06:34.851 { 00:06:34.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.851 "dma_device_type": 2 00:06:34.851 } 00:06:34.851 ], 00:06:34.851 "driver_specific": {} 00:06:34.851 } 00:06:34.851 ]' 00:06:34.851 14:16:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:34.851 14:16:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:34.851 14:16:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:34.851 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.851 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.112 [2024-10-07 14:16:58.563611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:35.112 [2024-10-07 14:16:58.563660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:35.112 [2024-10-07 14:16:58.563684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000021080 00:06:35.112 [2024-10-07 14:16:58.563695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:35.112 [2024-10-07 14:16:58.565873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:35.112 [2024-10-07 14:16:58.565900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:35.112 Passthru0 00:06:35.112 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.113 14:16:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:35.113 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.113 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.113 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:35.113 14:16:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:35.113 { 00:06:35.113 "name": "Malloc2", 00:06:35.113 "aliases": [ 00:06:35.113 "979776ce-2d16-44eb-8bce-efc16cbc6295" 00:06:35.113 ], 00:06:35.113 "product_name": "Malloc disk", 00:06:35.113 "block_size": 512, 00:06:35.113 "num_blocks": 16384, 00:06:35.113 "uuid": "979776ce-2d16-44eb-8bce-efc16cbc6295", 00:06:35.113 "assigned_rate_limits": { 00:06:35.113 "rw_ios_per_sec": 0, 00:06:35.113 "rw_mbytes_per_sec": 0, 00:06:35.113 "r_mbytes_per_sec": 0, 00:06:35.113 "w_mbytes_per_sec": 0 00:06:35.113 }, 00:06:35.113 "claimed": true, 00:06:35.113 "claim_type": "exclusive_write", 00:06:35.113 "zoned": false, 00:06:35.113 "supported_io_types": { 00:06:35.113 "read": true, 00:06:35.113 "write": true, 00:06:35.113 "unmap": true, 00:06:35.113 "flush": true, 00:06:35.113 "reset": true, 00:06:35.113 "nvme_admin": false, 00:06:35.113 "nvme_io": false, 00:06:35.113 "nvme_io_md": false, 00:06:35.113 "write_zeroes": true, 00:06:35.113 "zcopy": true, 00:06:35.113 "get_zone_info": false, 00:06:35.113 "zone_management": false, 00:06:35.113 "zone_append": false, 00:06:35.113 "compare": false, 00:06:35.113 "compare_and_write": false, 00:06:35.113 "abort": true, 00:06:35.113 "seek_hole": false, 00:06:35.113 "seek_data": false, 00:06:35.113 "copy": true, 00:06:35.113 "nvme_iov_md": false 00:06:35.113 }, 00:06:35.113 "memory_domains": [ 00:06:35.113 { 00:06:35.113 "dma_device_id": "system", 00:06:35.113 "dma_device_type": 1 00:06:35.113 }, 00:06:35.113 { 00:06:35.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.113 "dma_device_type": 2 00:06:35.113 } 00:06:35.113 ], 00:06:35.113 "driver_specific": {} 00:06:35.113 }, 00:06:35.113 { 00:06:35.113 "name": "Passthru0", 00:06:35.113 "aliases": [ 00:06:35.113 "75ff57a4-4ad5-50fc-a009-662417e6e94a" 00:06:35.113 ], 00:06:35.113 "product_name": "passthru", 00:06:35.113 "block_size": 512, 00:06:35.113 "num_blocks": 16384, 00:06:35.113 "uuid": 
"75ff57a4-4ad5-50fc-a009-662417e6e94a", 00:06:35.113 "assigned_rate_limits": { 00:06:35.113 "rw_ios_per_sec": 0, 00:06:35.113 "rw_mbytes_per_sec": 0, 00:06:35.113 "r_mbytes_per_sec": 0, 00:06:35.113 "w_mbytes_per_sec": 0 00:06:35.113 }, 00:06:35.113 "claimed": false, 00:06:35.113 "zoned": false, 00:06:35.113 "supported_io_types": { 00:06:35.113 "read": true, 00:06:35.113 "write": true, 00:06:35.113 "unmap": true, 00:06:35.113 "flush": true, 00:06:35.113 "reset": true, 00:06:35.113 "nvme_admin": false, 00:06:35.113 "nvme_io": false, 00:06:35.113 "nvme_io_md": false, 00:06:35.113 "write_zeroes": true, 00:06:35.113 "zcopy": true, 00:06:35.113 "get_zone_info": false, 00:06:35.113 "zone_management": false, 00:06:35.113 "zone_append": false, 00:06:35.113 "compare": false, 00:06:35.113 "compare_and_write": false, 00:06:35.113 "abort": true, 00:06:35.113 "seek_hole": false, 00:06:35.113 "seek_data": false, 00:06:35.113 "copy": true, 00:06:35.113 "nvme_iov_md": false 00:06:35.113 }, 00:06:35.113 "memory_domains": [ 00:06:35.113 { 00:06:35.113 "dma_device_id": "system", 00:06:35.113 "dma_device_type": 1 00:06:35.113 }, 00:06:35.113 { 00:06:35.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.113 "dma_device_type": 2 00:06:35.113 } 00:06:35.113 ], 00:06:35.113 "driver_specific": { 00:06:35.113 "passthru": { 00:06:35.113 "name": "Passthru0", 00:06:35.113 "base_bdev_name": "Malloc2" 00:06:35.113 } 00:06:35.113 } 00:06:35.113 } 00:06:35.113 ]' 00:06:35.113 14:16:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:35.113 14:16:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:35.113 14:16:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:35.113 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.113 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.113 14:16:58 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.113 14:16:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:35.113 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.113 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.113 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.113 14:16:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:35.113 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.113 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.113 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.113 14:16:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:35.113 14:16:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:35.113 14:16:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:35.113 00:06:35.113 real 0m0.324s 00:06:35.113 user 0m0.193s 00:06:35.113 sys 0m0.043s 00:06:35.113 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.113 14:16:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:35.113 ************************************ 00:06:35.113 END TEST rpc_daemon_integrity 00:06:35.113 ************************************ 00:06:35.113 14:16:58 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:35.113 14:16:58 rpc -- rpc/rpc.sh@84 -- # killprocess 2761954 00:06:35.113 14:16:58 rpc -- common/autotest_common.sh@950 -- # '[' -z 2761954 ']' 00:06:35.113 14:16:58 rpc -- common/autotest_common.sh@954 -- # kill -0 2761954 00:06:35.113 14:16:58 rpc -- common/autotest_common.sh@955 -- # uname 00:06:35.113 14:16:58 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.113 14:16:58 rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2761954 00:06:35.373 14:16:58 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.373 14:16:58 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.373 14:16:58 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2761954' 00:06:35.373 killing process with pid 2761954 00:06:35.373 14:16:58 rpc -- common/autotest_common.sh@969 -- # kill 2761954 00:06:35.373 14:16:58 rpc -- common/autotest_common.sh@974 -- # wait 2761954 00:06:37.286 00:06:37.286 real 0m4.385s 00:06:37.286 user 0m5.002s 00:06:37.286 sys 0m0.933s 00:06:37.286 14:17:00 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.286 14:17:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.286 ************************************ 00:06:37.286 END TEST rpc 00:06:37.286 ************************************ 00:06:37.286 14:17:00 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:37.286 14:17:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.286 14:17:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.286 14:17:00 -- common/autotest_common.sh@10 -- # set +x 00:06:37.286 ************************************ 00:06:37.286 START TEST skip_rpc 00:06:37.286 ************************************ 00:06:37.286 14:17:00 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:37.286 * Looking for test storage... 
00:06:37.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:37.286 14:17:00 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:37.286 14:17:00 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:37.286 14:17:00 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:37.286 14:17:00 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:37.286 14:17:00 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.286 14:17:00 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.286 14:17:00 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.286 14:17:00 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.286 14:17:00 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.286 14:17:00 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.286 14:17:00 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.286 14:17:00 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.286 14:17:00 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.286 14:17:00 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.286 14:17:00 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.286 14:17:00 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:37.286 14:17:00 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:37.286 14:17:00 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.287 14:17:00 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.287 14:17:00 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:37.287 14:17:00 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:37.287 14:17:00 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.287 14:17:00 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:37.287 14:17:00 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.287 14:17:00 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:37.287 14:17:00 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:37.287 14:17:00 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.287 14:17:00 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:37.287 14:17:00 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.287 14:17:00 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.287 14:17:00 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.287 14:17:00 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:37.287 14:17:00 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.287 14:17:00 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:37.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.287 --rc genhtml_branch_coverage=1 00:06:37.287 --rc genhtml_function_coverage=1 00:06:37.287 --rc genhtml_legend=1 00:06:37.287 --rc geninfo_all_blocks=1 00:06:37.287 --rc geninfo_unexecuted_blocks=1 00:06:37.287 00:06:37.287 ' 00:06:37.287 14:17:00 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:37.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.287 --rc genhtml_branch_coverage=1 00:06:37.287 --rc genhtml_function_coverage=1 00:06:37.287 --rc genhtml_legend=1 00:06:37.287 --rc geninfo_all_blocks=1 00:06:37.287 --rc geninfo_unexecuted_blocks=1 00:06:37.287 00:06:37.287 ' 00:06:37.287 14:17:00 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:06:37.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.287 --rc genhtml_branch_coverage=1 00:06:37.287 --rc genhtml_function_coverage=1 00:06:37.287 --rc genhtml_legend=1 00:06:37.287 --rc geninfo_all_blocks=1 00:06:37.287 --rc geninfo_unexecuted_blocks=1 00:06:37.287 00:06:37.287 ' 00:06:37.287 14:17:00 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:37.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.287 --rc genhtml_branch_coverage=1 00:06:37.287 --rc genhtml_function_coverage=1 00:06:37.287 --rc genhtml_legend=1 00:06:37.287 --rc geninfo_all_blocks=1 00:06:37.287 --rc geninfo_unexecuted_blocks=1 00:06:37.287 00:06:37.287 ' 00:06:37.287 14:17:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:37.287 14:17:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:37.287 14:17:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:37.287 14:17:00 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.287 14:17:00 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.287 14:17:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.287 ************************************ 00:06:37.287 START TEST skip_rpc 00:06:37.287 ************************************ 00:06:37.287 14:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:37.287 14:17:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2763125 00:06:37.287 14:17:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:37.287 14:17:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:37.287 14:17:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:06:37.287 [2024-10-07 14:17:00.972979] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:37.287 [2024-10-07 14:17:00.973111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2763125 ] 00:06:37.549 [2024-10-07 14:17:01.103285] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.809 [2024-10-07 14:17:01.284660] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:43.092 14:17:05 
skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2763125 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2763125 ']' 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2763125 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2763125 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2763125' 00:06:43.092 killing process with pid 2763125 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2763125 00:06:43.092 14:17:05 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2763125 00:06:44.034 00:06:44.034 real 0m6.792s 00:06:44.034 user 0m6.441s 00:06:44.034 sys 0m0.391s 00:06:44.034 14:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.034 14:17:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.034 ************************************ 00:06:44.034 END TEST skip_rpc 00:06:44.034 ************************************ 00:06:44.034 14:17:07 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:44.034 14:17:07 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.034 14:17:07 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.034 14:17:07 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.034 ************************************ 00:06:44.034 START TEST skip_rpc_with_json 00:06:44.034 ************************************ 00:06:44.034 14:17:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:44.034 14:17:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:44.034 14:17:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2764504 00:06:44.034 14:17:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:44.034 14:17:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2764504 00:06:44.034 14:17:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.034 14:17:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2764504 ']' 00:06:44.034 14:17:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.034 14:17:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.034 14:17:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.034 14:17:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.034 14:17:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:44.296 [2024-10-07 14:17:07.817542] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:06:44.296 [2024-10-07 14:17:07.817661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2764504 ] 00:06:44.296 [2024-10-07 14:17:07.945472] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.557 [2024-10-07 14:17:08.130059] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.132 14:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.132 14:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:45.132 14:17:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:45.132 14:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.132 14:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:45.132 [2024-10-07 14:17:08.782191] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:45.132 request: 00:06:45.132 { 00:06:45.132 "trtype": "tcp", 00:06:45.132 "method": "nvmf_get_transports", 00:06:45.132 "req_id": 1 00:06:45.132 } 00:06:45.132 Got JSON-RPC error response 00:06:45.132 response: 00:06:45.132 { 00:06:45.132 "code": -19, 00:06:45.132 "message": "No such device" 00:06:45.132 } 00:06:45.132 14:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:45.132 14:17:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:45.132 14:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.132 14:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:45.132 [2024-10-07 14:17:08.794393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.132 14:17:08 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.132 14:17:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:45.132 14:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.132 14:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:45.393 14:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.393 14:17:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:45.393 { 00:06:45.393 "subsystems": [ 00:06:45.393 { 00:06:45.393 "subsystem": "fsdev", 00:06:45.393 "config": [ 00:06:45.393 { 00:06:45.393 "method": "fsdev_set_opts", 00:06:45.393 "params": { 00:06:45.393 "fsdev_io_pool_size": 65535, 00:06:45.393 "fsdev_io_cache_size": 256 00:06:45.393 } 00:06:45.393 } 00:06:45.393 ] 00:06:45.393 }, 00:06:45.393 { 00:06:45.393 "subsystem": "keyring", 00:06:45.393 "config": [] 00:06:45.393 }, 00:06:45.393 { 00:06:45.393 "subsystem": "iobuf", 00:06:45.393 "config": [ 00:06:45.393 { 00:06:45.393 "method": "iobuf_set_options", 00:06:45.393 "params": { 00:06:45.393 "small_pool_count": 8192, 00:06:45.393 "large_pool_count": 1024, 00:06:45.393 "small_bufsize": 8192, 00:06:45.393 "large_bufsize": 135168 00:06:45.393 } 00:06:45.393 } 00:06:45.393 ] 00:06:45.393 }, 00:06:45.393 { 00:06:45.393 "subsystem": "sock", 00:06:45.393 "config": [ 00:06:45.393 { 00:06:45.393 "method": "sock_set_default_impl", 00:06:45.393 "params": { 00:06:45.393 "impl_name": "posix" 00:06:45.393 } 00:06:45.393 }, 00:06:45.393 { 00:06:45.393 "method": "sock_impl_set_options", 00:06:45.393 "params": { 00:06:45.393 "impl_name": "ssl", 00:06:45.393 "recv_buf_size": 4096, 00:06:45.393 "send_buf_size": 4096, 00:06:45.393 "enable_recv_pipe": true, 00:06:45.393 "enable_quickack": false, 00:06:45.393 "enable_placement_id": 0, 00:06:45.393 
"enable_zerocopy_send_server": true, 00:06:45.393 "enable_zerocopy_send_client": false, 00:06:45.393 "zerocopy_threshold": 0, 00:06:45.393 "tls_version": 0, 00:06:45.393 "enable_ktls": false 00:06:45.393 } 00:06:45.393 }, 00:06:45.393 { 00:06:45.393 "method": "sock_impl_set_options", 00:06:45.393 "params": { 00:06:45.393 "impl_name": "posix", 00:06:45.393 "recv_buf_size": 2097152, 00:06:45.393 "send_buf_size": 2097152, 00:06:45.393 "enable_recv_pipe": true, 00:06:45.393 "enable_quickack": false, 00:06:45.393 "enable_placement_id": 0, 00:06:45.393 "enable_zerocopy_send_server": true, 00:06:45.393 "enable_zerocopy_send_client": false, 00:06:45.393 "zerocopy_threshold": 0, 00:06:45.393 "tls_version": 0, 00:06:45.393 "enable_ktls": false 00:06:45.393 } 00:06:45.393 } 00:06:45.393 ] 00:06:45.393 }, 00:06:45.393 { 00:06:45.393 "subsystem": "vmd", 00:06:45.393 "config": [] 00:06:45.393 }, 00:06:45.393 { 00:06:45.393 "subsystem": "accel", 00:06:45.393 "config": [ 00:06:45.393 { 00:06:45.393 "method": "accel_set_options", 00:06:45.393 "params": { 00:06:45.393 "small_cache_size": 128, 00:06:45.393 "large_cache_size": 16, 00:06:45.393 "task_count": 2048, 00:06:45.393 "sequence_count": 2048, 00:06:45.393 "buf_count": 2048 00:06:45.393 } 00:06:45.393 } 00:06:45.393 ] 00:06:45.393 }, 00:06:45.393 { 00:06:45.393 "subsystem": "bdev", 00:06:45.393 "config": [ 00:06:45.393 { 00:06:45.393 "method": "bdev_set_options", 00:06:45.393 "params": { 00:06:45.393 "bdev_io_pool_size": 65535, 00:06:45.393 "bdev_io_cache_size": 256, 00:06:45.393 "bdev_auto_examine": true, 00:06:45.393 "iobuf_small_cache_size": 128, 00:06:45.393 "iobuf_large_cache_size": 16 00:06:45.393 } 00:06:45.393 }, 00:06:45.393 { 00:06:45.393 "method": "bdev_raid_set_options", 00:06:45.393 "params": { 00:06:45.393 "process_window_size_kb": 1024, 00:06:45.393 "process_max_bandwidth_mb_sec": 0 00:06:45.393 } 00:06:45.393 }, 00:06:45.393 { 00:06:45.393 "method": "bdev_iscsi_set_options", 00:06:45.393 "params": { 00:06:45.393 
"timeout_sec": 30 00:06:45.393 } 00:06:45.393 }, 00:06:45.393 { 00:06:45.393 "method": "bdev_nvme_set_options", 00:06:45.393 "params": { 00:06:45.393 "action_on_timeout": "none", 00:06:45.393 "timeout_us": 0, 00:06:45.393 "timeout_admin_us": 0, 00:06:45.393 "keep_alive_timeout_ms": 10000, 00:06:45.393 "arbitration_burst": 0, 00:06:45.393 "low_priority_weight": 0, 00:06:45.393 "medium_priority_weight": 0, 00:06:45.393 "high_priority_weight": 0, 00:06:45.393 "nvme_adminq_poll_period_us": 10000, 00:06:45.393 "nvme_ioq_poll_period_us": 0, 00:06:45.393 "io_queue_requests": 0, 00:06:45.393 "delay_cmd_submit": true, 00:06:45.393 "transport_retry_count": 4, 00:06:45.393 "bdev_retry_count": 3, 00:06:45.393 "transport_ack_timeout": 0, 00:06:45.393 "ctrlr_loss_timeout_sec": 0, 00:06:45.393 "reconnect_delay_sec": 0, 00:06:45.393 "fast_io_fail_timeout_sec": 0, 00:06:45.393 "disable_auto_failback": false, 00:06:45.393 "generate_uuids": false, 00:06:45.393 "transport_tos": 0, 00:06:45.393 "nvme_error_stat": false, 00:06:45.393 "rdma_srq_size": 0, 00:06:45.393 "io_path_stat": false, 00:06:45.393 "allow_accel_sequence": false, 00:06:45.393 "rdma_max_cq_size": 0, 00:06:45.393 "rdma_cm_event_timeout_ms": 0, 00:06:45.393 "dhchap_digests": [ 00:06:45.393 "sha256", 00:06:45.393 "sha384", 00:06:45.393 "sha512" 00:06:45.393 ], 00:06:45.393 "dhchap_dhgroups": [ 00:06:45.393 "null", 00:06:45.393 "ffdhe2048", 00:06:45.393 "ffdhe3072", 00:06:45.393 "ffdhe4096", 00:06:45.393 "ffdhe6144", 00:06:45.393 "ffdhe8192" 00:06:45.393 ] 00:06:45.393 } 00:06:45.393 }, 00:06:45.393 { 00:06:45.393 "method": "bdev_nvme_set_hotplug", 00:06:45.393 "params": { 00:06:45.393 "period_us": 100000, 00:06:45.393 "enable": false 00:06:45.393 } 00:06:45.393 }, 00:06:45.393 { 00:06:45.393 "method": "bdev_wait_for_examine" 00:06:45.393 } 00:06:45.393 ] 00:06:45.393 }, 00:06:45.393 { 00:06:45.393 "subsystem": "scsi", 00:06:45.393 "config": null 00:06:45.393 }, 00:06:45.393 { 00:06:45.393 "subsystem": "scheduler", 
00:06:45.393 "config": [ 00:06:45.393 { 00:06:45.393 "method": "framework_set_scheduler", 00:06:45.393 "params": { 00:06:45.393 "name": "static" 00:06:45.393 } 00:06:45.393 } 00:06:45.393 ] 00:06:45.393 }, 00:06:45.393 { 00:06:45.393 "subsystem": "vhost_scsi", 00:06:45.393 "config": [] 00:06:45.393 }, 00:06:45.393 { 00:06:45.393 "subsystem": "vhost_blk", 00:06:45.393 "config": [] 00:06:45.393 }, 00:06:45.393 { 00:06:45.393 "subsystem": "ublk", 00:06:45.393 "config": [] 00:06:45.393 }, 00:06:45.393 { 00:06:45.393 "subsystem": "nbd", 00:06:45.393 "config": [] 00:06:45.393 }, 00:06:45.393 { 00:06:45.393 "subsystem": "nvmf", 00:06:45.393 "config": [ 00:06:45.393 { 00:06:45.393 "method": "nvmf_set_config", 00:06:45.393 "params": { 00:06:45.393 "discovery_filter": "match_any", 00:06:45.393 "admin_cmd_passthru": { 00:06:45.393 "identify_ctrlr": false 00:06:45.393 }, 00:06:45.393 "dhchap_digests": [ 00:06:45.393 "sha256", 00:06:45.393 "sha384", 00:06:45.393 "sha512" 00:06:45.393 ], 00:06:45.393 "dhchap_dhgroups": [ 00:06:45.393 "null", 00:06:45.393 "ffdhe2048", 00:06:45.393 "ffdhe3072", 00:06:45.393 "ffdhe4096", 00:06:45.393 "ffdhe6144", 00:06:45.393 "ffdhe8192" 00:06:45.393 ] 00:06:45.393 } 00:06:45.393 }, 00:06:45.393 { 00:06:45.393 "method": "nvmf_set_max_subsystems", 00:06:45.393 "params": { 00:06:45.393 "max_subsystems": 1024 00:06:45.393 } 00:06:45.393 }, 00:06:45.393 { 00:06:45.393 "method": "nvmf_set_crdt", 00:06:45.393 "params": { 00:06:45.393 "crdt1": 0, 00:06:45.393 "crdt2": 0, 00:06:45.393 "crdt3": 0 00:06:45.393 } 00:06:45.393 }, 00:06:45.393 { 00:06:45.393 "method": "nvmf_create_transport", 00:06:45.393 "params": { 00:06:45.393 "trtype": "TCP", 00:06:45.393 "max_queue_depth": 128, 00:06:45.393 "max_io_qpairs_per_ctrlr": 127, 00:06:45.393 "in_capsule_data_size": 4096, 00:06:45.393 "max_io_size": 131072, 00:06:45.393 "io_unit_size": 131072, 00:06:45.393 "max_aq_depth": 128, 00:06:45.393 "num_shared_buffers": 511, 00:06:45.393 "buf_cache_size": 4294967295, 
00:06:45.394 "dif_insert_or_strip": false, 00:06:45.394 "zcopy": false, 00:06:45.394 "c2h_success": true, 00:06:45.394 "sock_priority": 0, 00:06:45.394 "abort_timeout_sec": 1, 00:06:45.394 "ack_timeout": 0, 00:06:45.394 "data_wr_pool_size": 0 00:06:45.394 } 00:06:45.394 } 00:06:45.394 ] 00:06:45.394 }, 00:06:45.394 { 00:06:45.394 "subsystem": "iscsi", 00:06:45.394 "config": [ 00:06:45.394 { 00:06:45.394 "method": "iscsi_set_options", 00:06:45.394 "params": { 00:06:45.394 "node_base": "iqn.2016-06.io.spdk", 00:06:45.394 "max_sessions": 128, 00:06:45.394 "max_connections_per_session": 2, 00:06:45.394 "max_queue_depth": 64, 00:06:45.394 "default_time2wait": 2, 00:06:45.394 "default_time2retain": 20, 00:06:45.394 "first_burst_length": 8192, 00:06:45.394 "immediate_data": true, 00:06:45.394 "allow_duplicated_isid": false, 00:06:45.394 "error_recovery_level": 0, 00:06:45.394 "nop_timeout": 60, 00:06:45.394 "nop_in_interval": 30, 00:06:45.394 "disable_chap": false, 00:06:45.394 "require_chap": false, 00:06:45.394 "mutual_chap": false, 00:06:45.394 "chap_group": 0, 00:06:45.394 "max_large_datain_per_connection": 64, 00:06:45.394 "max_r2t_per_connection": 4, 00:06:45.394 "pdu_pool_size": 36864, 00:06:45.394 "immediate_data_pool_size": 16384, 00:06:45.394 "data_out_pool_size": 2048 00:06:45.394 } 00:06:45.394 } 00:06:45.394 ] 00:06:45.394 } 00:06:45.394 ] 00:06:45.394 } 00:06:45.394 14:17:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:45.394 14:17:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2764504 00:06:45.394 14:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2764504 ']' 00:06:45.394 14:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2764504 00:06:45.394 14:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:45.394 14:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:06:45.394 14:17:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2764504 00:06:45.394 14:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:45.394 14:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:45.394 14:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2764504' 00:06:45.394 killing process with pid 2764504 00:06:45.394 14:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2764504 00:06:45.394 14:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2764504 00:06:47.307 14:17:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2765186 00:06:47.307 14:17:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:47.307 14:17:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:52.606 14:17:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2765186 00:06:52.606 14:17:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2765186 ']' 00:06:52.606 14:17:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2765186 00:06:52.606 14:17:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:52.606 14:17:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.606 14:17:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2765186 00:06:52.606 14:17:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:52.606 14:17:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 
= sudo ']' 00:06:52.606 14:17:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2765186' 00:06:52.606 killing process with pid 2765186 00:06:52.606 14:17:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2765186 00:06:52.606 14:17:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2765186 00:06:53.993 14:17:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:53.993 14:17:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:53.993 00:06:53.993 real 0m9.806s 00:06:53.993 user 0m9.402s 00:06:53.993 sys 0m0.848s 00:06:53.993 14:17:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.993 14:17:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:53.993 ************************************ 00:06:53.993 END TEST skip_rpc_with_json 00:06:53.993 ************************************ 00:06:53.993 14:17:17 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:53.993 14:17:17 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.993 14:17:17 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.993 14:17:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.993 ************************************ 00:06:53.993 START TEST skip_rpc_with_delay 00:06:53.993 ************************************ 00:06:53.993 14:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:53.993 14:17:17 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:53.993 14:17:17 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@650 -- # local es=0 00:06:53.993 14:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:53.993 14:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:53.993 14:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.993 14:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:53.993 14:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.993 14:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:53.993 14:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.993 14:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:53.993 14:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:53.993 14:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:54.255 [2024-10-07 14:17:17.708652] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:54.255 [2024-10-07 14:17:17.708803] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:54.255 14:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:54.255 14:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:54.255 14:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:54.255 14:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:54.255 00:06:54.255 real 0m0.166s 00:06:54.255 user 0m0.088s 00:06:54.255 sys 0m0.077s 00:06:54.255 14:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.255 14:17:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:54.255 ************************************ 00:06:54.255 END TEST skip_rpc_with_delay 00:06:54.255 ************************************ 00:06:54.255 14:17:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:54.256 14:17:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:54.256 14:17:17 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:54.256 14:17:17 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.256 14:17:17 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.256 14:17:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.256 ************************************ 00:06:54.256 START TEST exit_on_failed_rpc_init 00:06:54.256 ************************************ 00:06:54.256 14:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:54.256 14:17:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2766591 00:06:54.256 14:17:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2766591 00:06:54.256 14:17:17 
skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:54.256 14:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2766591 ']' 00:06:54.256 14:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.256 14:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.256 14:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.256 14:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.256 14:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:54.516 [2024-10-07 14:17:17.967819] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:06:54.516 [2024-10-07 14:17:17.967953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2766591 ] 00:06:54.516 [2024-10-07 14:17:18.100606] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.783 [2024-10-07 14:17:18.283389] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.358 14:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.358 14:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:55.358 14:17:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:55.358 14:17:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:55.358 14:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:55.358 14:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:55.358 14:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:55.358 14:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.358 14:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:55.358 14:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.358 14:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:55.358 14:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.359 14:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:55.359 14:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:55.359 14:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:55.359 [2024-10-07 14:17:19.038033] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:55.359 [2024-10-07 14:17:19.038146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2766925 ] 00:06:55.619 [2024-10-07 14:17:19.169427] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.880 [2024-10-07 14:17:19.347316] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.880 [2024-10-07 14:17:19.347396] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:55.880 [2024-10-07 14:17:19.347413] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:55.880 [2024-10-07 14:17:19.347424] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.141 14:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:56.141 14:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:56.141 14:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:56.141 14:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:56.141 14:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:56.141 14:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:56.141 14:17:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:56.141 14:17:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2766591 00:06:56.141 14:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2766591 ']' 00:06:56.141 14:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2766591 00:06:56.141 14:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:56.141 14:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.141 14:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2766591 00:06:56.141 14:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.141 14:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.141 14:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2766591' 
00:06:56.141 killing process with pid 2766591 00:06:56.141 14:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2766591 00:06:56.141 14:17:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2766591 00:06:58.057 00:06:58.057 real 0m3.575s 00:06:58.057 user 0m4.042s 00:06:58.057 sys 0m0.639s 00:06:58.057 14:17:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.057 14:17:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:58.057 ************************************ 00:06:58.057 END TEST exit_on_failed_rpc_init 00:06:58.057 ************************************ 00:06:58.057 14:17:21 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:58.057 00:06:58.057 real 0m20.839s 00:06:58.057 user 0m20.199s 00:06:58.057 sys 0m2.260s 00:06:58.057 14:17:21 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.057 14:17:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.057 ************************************ 00:06:58.057 END TEST skip_rpc 00:06:58.057 ************************************ 00:06:58.057 14:17:21 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:58.057 14:17:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.057 14:17:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.057 14:17:21 -- common/autotest_common.sh@10 -- # set +x 00:06:58.057 ************************************ 00:06:58.057 START TEST rpc_client 00:06:58.057 ************************************ 00:06:58.057 14:17:21 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:58.057 * Looking for test storage... 
00:06:58.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:58.057 14:17:21 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:58.057 14:17:21 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:58.057 14:17:21 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:58.057 14:17:21 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.057 14:17:21 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:58.057 14:17:21 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.057 14:17:21 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:58.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.057 --rc genhtml_branch_coverage=1 00:06:58.057 --rc genhtml_function_coverage=1 00:06:58.057 --rc genhtml_legend=1 00:06:58.057 --rc geninfo_all_blocks=1 00:06:58.057 --rc geninfo_unexecuted_blocks=1 00:06:58.057 00:06:58.057 ' 00:06:58.057 14:17:21 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:58.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.057 --rc genhtml_branch_coverage=1 00:06:58.057 --rc genhtml_function_coverage=1 00:06:58.057 --rc genhtml_legend=1 00:06:58.057 --rc geninfo_all_blocks=1 00:06:58.057 --rc geninfo_unexecuted_blocks=1 00:06:58.057 00:06:58.057 ' 00:06:58.057 14:17:21 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:58.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.057 --rc genhtml_branch_coverage=1 00:06:58.057 --rc genhtml_function_coverage=1 00:06:58.057 --rc genhtml_legend=1 00:06:58.057 --rc geninfo_all_blocks=1 00:06:58.057 --rc geninfo_unexecuted_blocks=1 00:06:58.057 00:06:58.057 ' 00:06:58.057 14:17:21 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:58.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.057 --rc genhtml_branch_coverage=1 00:06:58.057 --rc genhtml_function_coverage=1 00:06:58.057 --rc genhtml_legend=1 00:06:58.057 --rc geninfo_all_blocks=1 00:06:58.057 --rc geninfo_unexecuted_blocks=1 00:06:58.057 00:06:58.057 ' 00:06:58.057 14:17:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:58.319 OK 00:06:58.319 14:17:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:58.319 00:06:58.319 real 0m0.266s 00:06:58.319 user 0m0.153s 00:06:58.319 sys 0m0.126s 00:06:58.319 14:17:21 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.319 14:17:21 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:58.319 ************************************ 00:06:58.319 END TEST rpc_client 00:06:58.319 ************************************ 00:06:58.319 14:17:21 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:58.319 14:17:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.319 14:17:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.319 14:17:21 -- common/autotest_common.sh@10 -- # set +x 00:06:58.319 ************************************ 00:06:58.319 START TEST json_config 00:06:58.319 ************************************ 00:06:58.319 14:17:21 json_config -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:58.319 14:17:21 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:58.319 14:17:21 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:58.319 14:17:21 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:58.582 14:17:22 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:58.582 14:17:22 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.582 14:17:22 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.582 14:17:22 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.582 14:17:22 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.582 14:17:22 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.582 14:17:22 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.582 14:17:22 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.582 14:17:22 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.582 14:17:22 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.582 14:17:22 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.582 14:17:22 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.582 14:17:22 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:58.582 14:17:22 json_config -- scripts/common.sh@345 -- # : 1 00:06:58.582 14:17:22 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.582 14:17:22 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:58.582 14:17:22 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:58.582 14:17:22 json_config -- scripts/common.sh@353 -- # local d=1 00:06:58.582 14:17:22 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.582 14:17:22 json_config -- scripts/common.sh@355 -- # echo 1 00:06:58.582 14:17:22 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.582 14:17:22 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:58.582 14:17:22 json_config -- scripts/common.sh@353 -- # local d=2 00:06:58.582 14:17:22 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.582 14:17:22 json_config -- scripts/common.sh@355 -- # echo 2 00:06:58.582 14:17:22 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.582 14:17:22 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.582 14:17:22 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.582 14:17:22 json_config -- scripts/common.sh@368 -- # return 0 00:06:58.582 14:17:22 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.582 14:17:22 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:58.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.582 --rc genhtml_branch_coverage=1 00:06:58.582 --rc genhtml_function_coverage=1 00:06:58.582 --rc genhtml_legend=1 00:06:58.582 --rc geninfo_all_blocks=1 00:06:58.582 --rc geninfo_unexecuted_blocks=1 00:06:58.582 00:06:58.582 ' 00:06:58.582 14:17:22 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:58.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.582 --rc genhtml_branch_coverage=1 00:06:58.582 --rc genhtml_function_coverage=1 00:06:58.582 --rc genhtml_legend=1 00:06:58.582 --rc geninfo_all_blocks=1 00:06:58.582 --rc geninfo_unexecuted_blocks=1 00:06:58.582 00:06:58.582 ' 00:06:58.582 14:17:22 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:58.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.582 --rc genhtml_branch_coverage=1 00:06:58.582 --rc genhtml_function_coverage=1 00:06:58.582 --rc genhtml_legend=1 00:06:58.582 --rc geninfo_all_blocks=1 00:06:58.582 --rc geninfo_unexecuted_blocks=1 00:06:58.582 00:06:58.582 ' 00:06:58.582 14:17:22 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:58.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.582 --rc genhtml_branch_coverage=1 00:06:58.582 --rc genhtml_function_coverage=1 00:06:58.582 --rc genhtml_legend=1 00:06:58.582 --rc geninfo_all_blocks=1 00:06:58.582 --rc geninfo_unexecuted_blocks=1 00:06:58.582 00:06:58.582 ' 00:06:58.582 14:17:22 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:58.582 14:17:22 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:58.582 14:17:22 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.582 14:17:22 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.582 14:17:22 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.582 14:17:22 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.582 14:17:22 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:58.582 14:17:22 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:58.582 14:17:22 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.582 14:17:22 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:58.582 14:17:22 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:58.582 14:17:22 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:58.583 14:17:22 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:58.583 14:17:22 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:58.583 14:17:22 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:58.583 14:17:22 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:58.583 14:17:22 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:58.583 14:17:22 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:58.583 14:17:22 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.583 14:17:22 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:58.583 14:17:22 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.583 14:17:22 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.583 14:17:22 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.583 14:17:22 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.583 14:17:22 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.583 14:17:22 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.583 14:17:22 json_config -- paths/export.sh@5 -- # export PATH 00:06:58.583 14:17:22 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.583 14:17:22 json_config -- nvmf/common.sh@51 -- # : 0 00:06:58.583 14:17:22 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:58.583 14:17:22 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:58.583 14:17:22 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:58.583 14:17:22 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.583 14:17:22 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.583 14:17:22 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:58.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:58.583 14:17:22 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:58.583 14:17:22 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:58.583 14:17:22 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:58.583 14:17:22 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:58.583 14:17:22 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:58.583 14:17:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:58.583 14:17:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:58.583 14:17:22 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:58.583 14:17:22 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:58.583 14:17:22 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:58.583 14:17:22 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:58.583 14:17:22 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:58.583 14:17:22 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:58.583 14:17:22 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:58.583 14:17:22 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:58.583 14:17:22 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:58.583 14:17:22 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:58.583 14:17:22 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:58.583 14:17:22 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:58.583 INFO: JSON configuration test init 00:06:58.583 14:17:22 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:58.583 14:17:22 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:58.583 14:17:22 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:58.583 14:17:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:58.583 14:17:22 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:58.583 14:17:22 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:58.583 14:17:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:58.583 14:17:22 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:58.583 14:17:22 json_config -- json_config/common.sh@9 -- # local app=target 00:06:58.583 14:17:22 json_config -- json_config/common.sh@10 -- # shift 00:06:58.583 14:17:22 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:58.583 14:17:22 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:58.583 14:17:22 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:58.583 14:17:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:58.583 14:17:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:58.583 14:17:22 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2767722 00:06:58.583 14:17:22 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:58.583 Waiting for target to run... 
00:06:58.583 14:17:22 json_config -- json_config/common.sh@25 -- # waitforlisten 2767722 /var/tmp/spdk_tgt.sock 00:06:58.583 14:17:22 json_config -- common/autotest_common.sh@831 -- # '[' -z 2767722 ']' 00:06:58.583 14:17:22 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:58.583 14:17:22 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.583 14:17:22 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:58.583 14:17:22 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:58.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:58.583 14:17:22 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.583 14:17:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:58.583 [2024-10-07 14:17:22.233317] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:06:58.583 [2024-10-07 14:17:22.233444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2767722 ] 00:06:59.156 [2024-10-07 14:17:22.581909] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.156 [2024-10-07 14:17:22.760859] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.417 14:17:22 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.417 14:17:22 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:59.417 14:17:22 json_config -- json_config/common.sh@26 -- # echo '' 00:06:59.417 00:06:59.417 14:17:22 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:59.417 14:17:22 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:59.417 14:17:22 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:59.417 14:17:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:59.417 14:17:22 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:59.417 14:17:22 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:59.417 14:17:22 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:59.417 14:17:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:59.417 14:17:23 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:59.417 14:17:23 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:59.417 14:17:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:00.805 14:17:24 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:07:00.805 14:17:24 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:00.805 14:17:24 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:00.805 14:17:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:00.805 14:17:24 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:00.805 14:17:24 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:00.805 14:17:24 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:00.805 14:17:24 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:07:00.805 14:17:24 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:07:00.805 14:17:24 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:00.805 14:17:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:00.805 14:17:24 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:00.805 14:17:24 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:07:00.805 14:17:24 json_config -- json_config/json_config.sh@51 -- # local get_types 00:07:00.805 14:17:24 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:07:00.805 14:17:24 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:07:00.805 14:17:24 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:07:00.805 14:17:24 json_config -- json_config/json_config.sh@54 -- # sort 00:07:00.805 14:17:24 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:07:00.805 14:17:24 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:07:00.806 14:17:24 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:07:00.806 14:17:24 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:07:00.806 14:17:24 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:00.806 14:17:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:00.806 14:17:24 json_config -- json_config/json_config.sh@62 -- # return 0 00:07:00.806 14:17:24 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:07:00.806 14:17:24 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:07:00.806 14:17:24 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:07:00.806 14:17:24 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:07:00.806 14:17:24 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:07:00.806 14:17:24 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:07:00.806 14:17:24 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:00.806 14:17:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:00.806 14:17:24 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:00.806 14:17:24 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:07:00.806 14:17:24 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:07:00.806 14:17:24 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:00.806 14:17:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:00.806 MallocForNvmf0 00:07:00.806 14:17:24 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
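The trace above (json_config.sh@54) checks that the notification types reported by the target match the expected set by concatenating both lists and keeping only the items that appear once. A minimal standalone sketch of that `tr | sort | uniq -u` set-difference pattern — the list values below are taken from the trace, the variable names are the script's own:

```shell
# Two whitespace-separated "sets", as seen in the trace output.
enabled_types="bdev_register bdev_unregister fsdev_register fsdev_unregister"
get_types="fsdev_register fsdev_unregister bdev_register bdev_unregister"

# Concatenate both lists one item per line, sort, then keep only lines
# that occur exactly once: items present in one list but not the other.
type_diff=$(echo $enabled_types $get_types | tr ' ' '\n' | sort | uniq -u)

# An empty result means the two sets are identical.
[ -z "$type_diff" ] && echo "sets match"
```

Every type appears in both lists here, so `uniq -u` emits nothing and the check passes, which is exactly why the trace shows `type_diff=` followed by `return 0`.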
00:07:00.806 14:17:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:01.067 MallocForNvmf1 00:07:01.067 14:17:24 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:01.067 14:17:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:01.328 [2024-10-07 14:17:24.828934] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.328 14:17:24 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:01.328 14:17:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:01.328 14:17:25 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:01.328 14:17:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:01.589 14:17:25 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:01.589 14:17:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:01.851 14:17:25 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:01.851 14:17:25 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:01.851 [2024-10-07 14:17:25.519309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:01.851 14:17:25 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:07:01.851 14:17:25 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:01.851 14:17:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.112 14:17:25 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:07:02.112 14:17:25 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:02.112 14:17:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.112 14:17:25 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:07:02.112 14:17:25 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:02.112 14:17:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:02.112 MallocBdevForConfigChangeCheck 00:07:02.112 14:17:25 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:07:02.112 14:17:25 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:02.112 14:17:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.373 14:17:25 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:07:02.373 14:17:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:02.635 14:17:26 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:07:02.635 INFO: shutting down applications... 00:07:02.635 14:17:26 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:07:02.635 14:17:26 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:07:02.635 14:17:26 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:07:02.635 14:17:26 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:02.896 Calling clear_iscsi_subsystem 00:07:02.896 Calling clear_nvmf_subsystem 00:07:02.896 Calling clear_nbd_subsystem 00:07:02.896 Calling clear_ublk_subsystem 00:07:02.896 Calling clear_vhost_blk_subsystem 00:07:02.896 Calling clear_vhost_scsi_subsystem 00:07:02.896 Calling clear_bdev_subsystem 00:07:02.897 14:17:26 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:07:02.897 14:17:26 json_config -- json_config/json_config.sh@350 -- # count=100 00:07:02.897 14:17:26 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:07:02.897 14:17:26 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:02.897 14:17:26 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:02.897 14:17:26 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:03.468 14:17:26 json_config -- json_config/json_config.sh@352 -- # break 00:07:03.468 14:17:26 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:07:03.468 14:17:26 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:07:03.468 14:17:26 json_config -- json_config/common.sh@31 -- # local app=target 00:07:03.468 14:17:26 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:03.468 14:17:26 json_config -- json_config/common.sh@35 -- # [[ -n 2767722 ]] 00:07:03.468 14:17:26 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2767722 00:07:03.468 14:17:26 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:03.468 14:17:26 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:03.468 14:17:26 json_config -- json_config/common.sh@41 -- # kill -0 2767722 00:07:03.468 14:17:26 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:03.728 14:17:27 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:03.728 14:17:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:03.728 14:17:27 json_config -- json_config/common.sh@41 -- # kill -0 2767722 00:07:03.728 14:17:27 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:04.378 14:17:27 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:04.378 14:17:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:04.378 14:17:27 json_config -- json_config/common.sh@41 -- # kill -0 2767722 00:07:04.378 14:17:27 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:04.378 14:17:27 json_config -- json_config/common.sh@43 -- # break 00:07:04.378 14:17:27 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:04.378 14:17:27 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:04.378 SPDK target shutdown done 00:07:04.378 14:17:27 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:07:04.378 INFO: relaunching applications... 
00:07:04.378 14:17:27 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:04.378 14:17:27 json_config -- json_config/common.sh@9 -- # local app=target 00:07:04.378 14:17:27 json_config -- json_config/common.sh@10 -- # shift 00:07:04.378 14:17:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:04.378 14:17:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:04.378 14:17:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:04.378 14:17:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:04.378 14:17:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:04.378 14:17:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2768877 00:07:04.378 14:17:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:04.378 Waiting for target to run... 00:07:04.378 14:17:27 json_config -- json_config/common.sh@25 -- # waitforlisten 2768877 /var/tmp/spdk_tgt.sock 00:07:04.378 14:17:27 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:04.378 14:17:27 json_config -- common/autotest_common.sh@831 -- # '[' -z 2768877 ']' 00:07:04.378 14:17:27 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:04.378 14:17:27 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.378 14:17:27 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:04.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
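Both app launches in this log go through `waitforlisten <pid> /var/tmp/spdk_tgt.sock` (autotest_common.sh@835-864), which blocks until the target is listening on its UNIX domain socket. SPDK's real helper also issues RPC probes; the sketch below shows only the simpler filesystem-polling half of the idea, with an illustrative name and retry count:

```shell
# Poll until a UNIX domain socket appears at the given path.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100}
    local i
    for (( i = 0; i < max_retries; i++ )); do
        [ -S "$sock" ] && return 0    # -S: path exists and is a socket
        sleep 0.1
    done
    return 1
}
```

This is the pattern behind the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock..." messages in the trace.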
00:07:04.378 14:17:27 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.378 14:17:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:04.378 [2024-10-07 14:17:28.015967] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:04.378 [2024-10-07 14:17:28.016092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2768877 ] 00:07:05.010 [2024-10-07 14:17:28.372798] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.010 [2024-10-07 14:17:28.551434] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.957 [2024-10-07 14:17:29.557456] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.957 [2024-10-07 14:17:29.589897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:05.957 14:17:29 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.957 14:17:29 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:05.957 14:17:29 json_config -- json_config/common.sh@26 -- # echo '' 00:07:05.957 00:07:05.957 14:17:29 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:07:05.957 14:17:29 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:05.957 INFO: Checking if target configuration is the same... 
00:07:05.957 14:17:29 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:05.957 14:17:29 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:07:05.957 14:17:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:05.957 + '[' 2 -ne 2 ']' 00:07:05.957 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:05.957 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:07:05.957 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:05.957 +++ basename /dev/fd/62 00:07:05.957 ++ mktemp /tmp/62.XXX 00:07:05.957 + tmp_file_1=/tmp/62.nPN 00:07:05.957 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:05.957 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:05.957 + tmp_file_2=/tmp/spdk_tgt_config.json.Gex 00:07:05.957 + ret=0 00:07:05.957 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:06.529 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:06.529 + diff -u /tmp/62.nPN /tmp/spdk_tgt_config.json.Gex 00:07:06.529 + echo 'INFO: JSON config files are the same' 00:07:06.529 INFO: JSON config files are the same 00:07:06.529 + rm /tmp/62.nPN /tmp/spdk_tgt_config.json.Gex 00:07:06.529 + exit 0 00:07:06.529 14:17:29 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:07:06.529 14:17:29 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:06.529 INFO: changing configuration and checking if this can be detected... 
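The "Checking if target configuration is the same" step above runs the saved and live configs through `config_filter.py -method sort` and compares the normalized output with `diff -u`: exit 0 means no drift, exit 1 (after `bdev_malloc_delete`) means a change was detected. A hedged sketch of that normalize-then-diff approach, using python's sorted-key JSON dump as a stand-in for SPDK's filter script (helper names are illustrative):

```shell
# Canonicalize a JSON file so key order cannot cause spurious diffs.
normalize() {
    python3 -c 'import json,sys; print(json.dumps(json.load(sys.stdin), sort_keys=True))' < "$1"
}

# Return 0 if two JSON configs are semantically identical, 1 otherwise.
json_same() {
    local a b ret
    a=$(mktemp) && b=$(mktemp)
    normalize "$1" > "$a"
    normalize "$2" > "$b"
    diff -u "$a" "$b"    # prints a unified diff only when they differ
    ret=$?
    rm -f "$a" "$b"
    return $ret
}
```

This mirrors the trace's flow: two `mktemp` files, one `diff -u`, and the exit status driving either "JSON config files are the same" or "configuration change detected".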
00:07:06.529 14:17:29 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:06.529 14:17:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:06.529 14:17:30 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:07:06.529 14:17:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:06.529 14:17:30 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:06.529 + '[' 2 -ne 2 ']' 00:07:06.529 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:06.529 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:07:06.529 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:06.529 +++ basename /dev/fd/62 00:07:06.529 ++ mktemp /tmp/62.XXX 00:07:06.529 + tmp_file_1=/tmp/62.Dzr 00:07:06.529 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:06.529 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:06.529 + tmp_file_2=/tmp/spdk_tgt_config.json.Pwf 00:07:06.529 + ret=0 00:07:06.529 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:07.101 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:07.101 + diff -u /tmp/62.Dzr /tmp/spdk_tgt_config.json.Pwf 00:07:07.101 + ret=1 00:07:07.101 + echo '=== Start of file: /tmp/62.Dzr ===' 00:07:07.101 + cat /tmp/62.Dzr 00:07:07.101 + echo '=== End of file: /tmp/62.Dzr ===' 00:07:07.101 + echo '' 00:07:07.101 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Pwf ===' 00:07:07.101 + cat /tmp/spdk_tgt_config.json.Pwf 00:07:07.101 + echo '=== End of file: /tmp/spdk_tgt_config.json.Pwf ===' 00:07:07.101 + echo '' 00:07:07.101 + rm /tmp/62.Dzr /tmp/spdk_tgt_config.json.Pwf 00:07:07.101 + exit 1 00:07:07.101 14:17:30 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:07:07.101 INFO: configuration change detected. 
00:07:07.101 14:17:30 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:07:07.101 14:17:30 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:07:07.101 14:17:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:07.101 14:17:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:07.101 14:17:30 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:07:07.101 14:17:30 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:07:07.101 14:17:30 json_config -- json_config/json_config.sh@324 -- # [[ -n 2768877 ]] 00:07:07.101 14:17:30 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:07:07.101 14:17:30 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:07:07.101 14:17:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:07.101 14:17:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:07.101 14:17:30 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:07:07.101 14:17:30 json_config -- json_config/json_config.sh@200 -- # uname -s 00:07:07.101 14:17:30 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:07:07.101 14:17:30 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:07:07.101 14:17:30 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:07:07.101 14:17:30 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:07:07.101 14:17:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:07.101 14:17:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:07.101 14:17:30 json_config -- json_config/json_config.sh@330 -- # killprocess 2768877 00:07:07.101 14:17:30 json_config -- common/autotest_common.sh@950 -- # '[' -z 2768877 ']' 00:07:07.101 14:17:30 json_config -- common/autotest_common.sh@954 -- # kill -0 
2768877 00:07:07.101 14:17:30 json_config -- common/autotest_common.sh@955 -- # uname 00:07:07.101 14:17:30 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.101 14:17:30 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2768877 00:07:07.101 14:17:30 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:07.101 14:17:30 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:07.101 14:17:30 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2768877' 00:07:07.101 killing process with pid 2768877 00:07:07.101 14:17:30 json_config -- common/autotest_common.sh@969 -- # kill 2768877 00:07:07.102 14:17:30 json_config -- common/autotest_common.sh@974 -- # wait 2768877 00:07:08.044 14:17:31 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:08.044 14:17:31 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:07:08.044 14:17:31 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:08.044 14:17:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:08.044 14:17:31 json_config -- json_config/json_config.sh@335 -- # return 0 00:07:08.044 14:17:31 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:07:08.044 INFO: Success 00:07:08.044 00:07:08.044 real 0m9.693s 00:07:08.044 user 0m10.932s 00:07:08.044 sys 0m2.329s 00:07:08.044 14:17:31 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.044 14:17:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:08.044 ************************************ 00:07:08.044 END TEST json_config 00:07:08.044 ************************************ 00:07:08.044 14:17:31 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:08.044 14:17:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.044 14:17:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.044 14:17:31 -- common/autotest_common.sh@10 -- # set +x 00:07:08.044 ************************************ 00:07:08.044 START TEST json_config_extra_key 00:07:08.044 ************************************ 00:07:08.044 14:17:31 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:08.044 14:17:31 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:08.044 14:17:31 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:07:08.044 14:17:31 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:08.307 14:17:31 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:08.307 14:17:31 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.307 14:17:31 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:08.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.307 --rc genhtml_branch_coverage=1 00:07:08.307 --rc genhtml_function_coverage=1 00:07:08.307 --rc genhtml_legend=1 00:07:08.307 --rc geninfo_all_blocks=1 
00:07:08.307 --rc geninfo_unexecuted_blocks=1 00:07:08.307 00:07:08.307 ' 00:07:08.307 14:17:31 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:08.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.307 --rc genhtml_branch_coverage=1 00:07:08.307 --rc genhtml_function_coverage=1 00:07:08.307 --rc genhtml_legend=1 00:07:08.307 --rc geninfo_all_blocks=1 00:07:08.307 --rc geninfo_unexecuted_blocks=1 00:07:08.307 00:07:08.307 ' 00:07:08.307 14:17:31 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:08.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.307 --rc genhtml_branch_coverage=1 00:07:08.307 --rc genhtml_function_coverage=1 00:07:08.307 --rc genhtml_legend=1 00:07:08.307 --rc geninfo_all_blocks=1 00:07:08.307 --rc geninfo_unexecuted_blocks=1 00:07:08.307 00:07:08.307 ' 00:07:08.307 14:17:31 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:08.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.307 --rc genhtml_branch_coverage=1 00:07:08.307 --rc genhtml_function_coverage=1 00:07:08.307 --rc genhtml_legend=1 00:07:08.307 --rc geninfo_all_blocks=1 00:07:08.307 --rc geninfo_unexecuted_blocks=1 00:07:08.307 00:07:08.307 ' 00:07:08.307 14:17:31 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.307 14:17:31 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.307 14:17:31 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.307 14:17:31 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.307 14:17:31 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.307 14:17:31 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:08.307 14:17:31 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:08.307 14:17:31 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:08.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:08.307 14:17:31 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:08.307 14:17:31 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:08.307 14:17:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:08.307 14:17:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:08.308 14:17:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:08.308 14:17:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:08.308 14:17:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:08.308 14:17:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:08.308 14:17:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:08.308 14:17:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:08.308 14:17:31 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:08.308 14:17:31 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:08.308 INFO: launching applications... 00:07:08.308 14:17:31 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:08.308 14:17:31 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:08.308 14:17:31 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:08.308 14:17:31 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:08.308 14:17:31 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:08.308 14:17:31 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:08.308 14:17:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:08.308 14:17:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:08.308 14:17:31 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2769862 00:07:08.308 14:17:31 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:08.308 Waiting for target to run... 
00:07:08.308 14:17:31 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2769862 /var/tmp/spdk_tgt.sock 00:07:08.308 14:17:31 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2769862 ']' 00:07:08.308 14:17:31 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:08.308 14:17:31 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:08.308 14:17:31 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.308 14:17:31 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:08.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:08.308 14:17:31 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.308 14:17:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:08.308 [2024-10-07 14:17:31.987545] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:07:08.308 [2024-10-07 14:17:31.987687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2769862 ] 00:07:08.881 [2024-10-07 14:17:32.353164] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.881 [2024-10-07 14:17:32.524494] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.452 14:17:33 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.452 14:17:33 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:09.452 14:17:33 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:09.452 00:07:09.452 14:17:33 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:09.452 INFO: shutting down applications... 00:07:09.452 14:17:33 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:09.452 14:17:33 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:09.452 14:17:33 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:09.452 14:17:33 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2769862 ]] 00:07:09.452 14:17:33 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2769862 00:07:09.452 14:17:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:09.452 14:17:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:09.452 14:17:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2769862 00:07:09.452 14:17:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:10.023 14:17:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:10.023 14:17:33 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:07:10.023 14:17:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2769862 00:07:10.023 14:17:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:10.594 14:17:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:10.594 14:17:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:10.594 14:17:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2769862 00:07:10.594 14:17:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:11.166 14:17:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:11.166 14:17:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:11.166 14:17:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2769862 00:07:11.166 14:17:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:11.427 14:17:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:11.427 14:17:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:11.427 14:17:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2769862 00:07:11.427 14:17:35 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:11.427 14:17:35 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:11.427 14:17:35 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:11.427 14:17:35 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:11.427 SPDK target shutdown done 00:07:11.427 14:17:35 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:11.427 Success 00:07:11.427 00:07:11.427 real 0m3.399s 00:07:11.427 user 0m3.003s 00:07:11.427 sys 0m0.588s 00:07:11.427 14:17:35 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.427 14:17:35 json_config_extra_key -- 
common/autotest_common.sh@10 -- # set +x 00:07:11.427 ************************************ 00:07:11.427 END TEST json_config_extra_key 00:07:11.427 ************************************ 00:07:11.427 14:17:35 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:11.427 14:17:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.427 14:17:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.427 14:17:35 -- common/autotest_common.sh@10 -- # set +x 00:07:11.688 ************************************ 00:07:11.688 START TEST alias_rpc 00:07:11.688 ************************************ 00:07:11.688 14:17:35 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:11.688 * Looking for test storage... 00:07:11.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:07:11.688 14:17:35 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:11.688 14:17:35 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:11.688 14:17:35 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:07:11.688 14:17:35 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:11.688 14:17:35 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.688 14:17:35 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.688 14:17:35 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.688 14:17:35 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.688 14:17:35 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.688 14:17:35 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.689 14:17:35 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.689 14:17:35 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.689 14:17:35 
alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.689 14:17:35 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.689 14:17:35 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.689 14:17:35 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:11.689 14:17:35 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:11.689 14:17:35 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.689 14:17:35 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:11.689 14:17:35 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:11.689 14:17:35 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:11.689 14:17:35 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.689 14:17:35 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:11.689 14:17:35 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.689 14:17:35 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:11.689 14:17:35 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:11.689 14:17:35 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.689 14:17:35 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:11.689 14:17:35 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.689 14:17:35 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.689 14:17:35 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.689 14:17:35 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:11.689 14:17:35 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.689 14:17:35 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:11.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.689 --rc genhtml_branch_coverage=1 00:07:11.689 --rc genhtml_function_coverage=1 00:07:11.689 --rc genhtml_legend=1 00:07:11.689 --rc geninfo_all_blocks=1 00:07:11.689 --rc 
geninfo_unexecuted_blocks=1 00:07:11.689 00:07:11.689 ' 00:07:11.689 14:17:35 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:11.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.689 --rc genhtml_branch_coverage=1 00:07:11.689 --rc genhtml_function_coverage=1 00:07:11.689 --rc genhtml_legend=1 00:07:11.689 --rc geninfo_all_blocks=1 00:07:11.689 --rc geninfo_unexecuted_blocks=1 00:07:11.689 00:07:11.689 ' 00:07:11.689 14:17:35 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:11.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.689 --rc genhtml_branch_coverage=1 00:07:11.689 --rc genhtml_function_coverage=1 00:07:11.689 --rc genhtml_legend=1 00:07:11.689 --rc geninfo_all_blocks=1 00:07:11.689 --rc geninfo_unexecuted_blocks=1 00:07:11.689 00:07:11.689 ' 00:07:11.689 14:17:35 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:11.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.689 --rc genhtml_branch_coverage=1 00:07:11.689 --rc genhtml_function_coverage=1 00:07:11.689 --rc genhtml_legend=1 00:07:11.689 --rc geninfo_all_blocks=1 00:07:11.689 --rc geninfo_unexecuted_blocks=1 00:07:11.689 00:07:11.689 ' 00:07:11.689 14:17:35 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:11.689 14:17:35 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2770638 00:07:11.689 14:17:35 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2770638 00:07:11.689 14:17:35 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:11.689 14:17:35 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2770638 ']' 00:07:11.689 14:17:35 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.689 14:17:35 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.689 14:17:35 alias_rpc -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.689 14:17:35 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.689 14:17:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.949 [2024-10-07 14:17:35.457187] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:11.949 [2024-10-07 14:17:35.457321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2770638 ] 00:07:11.949 [2024-10-07 14:17:35.586360] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.209 [2024-10-07 14:17:35.767648] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.779 14:17:36 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.779 14:17:36 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:12.779 14:17:36 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:13.039 14:17:36 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2770638 00:07:13.039 14:17:36 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2770638 ']' 00:07:13.039 14:17:36 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2770638 00:07:13.039 14:17:36 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:13.039 14:17:36 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.039 14:17:36 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2770638 00:07:13.039 14:17:36 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:13.039 14:17:36 alias_rpc -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:13.039 14:17:36 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2770638' 00:07:13.039 killing process with pid 2770638 00:07:13.039 14:17:36 alias_rpc -- common/autotest_common.sh@969 -- # kill 2770638 00:07:13.039 14:17:36 alias_rpc -- common/autotest_common.sh@974 -- # wait 2770638 00:07:14.952 00:07:14.952 real 0m3.235s 00:07:14.952 user 0m3.257s 00:07:14.952 sys 0m0.547s 00:07:14.952 14:17:38 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.952 14:17:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.952 ************************************ 00:07:14.952 END TEST alias_rpc 00:07:14.952 ************************************ 00:07:14.952 14:17:38 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:14.952 14:17:38 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:14.952 14:17:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.952 14:17:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.952 14:17:38 -- common/autotest_common.sh@10 -- # set +x 00:07:14.952 ************************************ 00:07:14.952 START TEST spdkcli_tcp 00:07:14.952 ************************************ 00:07:14.952 14:17:38 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:14.952 * Looking for test storage... 
00:07:14.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:07:14.952 14:17:38 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:14.952 14:17:38 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:07:14.952 14:17:38 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:14.952 14:17:38 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.952 14:17:38 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:14.952 14:17:38 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.952 14:17:38 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:14.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.952 --rc genhtml_branch_coverage=1 00:07:14.952 --rc genhtml_function_coverage=1 00:07:14.952 --rc genhtml_legend=1 00:07:14.952 --rc geninfo_all_blocks=1 00:07:14.952 --rc geninfo_unexecuted_blocks=1 00:07:14.952 00:07:14.952 ' 00:07:14.952 14:17:38 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:14.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.952 --rc genhtml_branch_coverage=1 00:07:14.952 --rc genhtml_function_coverage=1 00:07:14.952 --rc genhtml_legend=1 00:07:14.952 --rc geninfo_all_blocks=1 00:07:14.952 --rc geninfo_unexecuted_blocks=1 00:07:14.952 00:07:14.952 ' 00:07:14.952 14:17:38 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:14.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.952 --rc genhtml_branch_coverage=1 00:07:14.952 --rc genhtml_function_coverage=1 00:07:14.952 --rc genhtml_legend=1 00:07:14.952 --rc geninfo_all_blocks=1 00:07:14.952 --rc geninfo_unexecuted_blocks=1 00:07:14.952 00:07:14.952 ' 00:07:14.952 14:17:38 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:14.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.952 --rc genhtml_branch_coverage=1 00:07:14.952 --rc genhtml_function_coverage=1 00:07:14.952 --rc genhtml_legend=1 00:07:14.952 --rc geninfo_all_blocks=1 00:07:14.952 --rc geninfo_unexecuted_blocks=1 00:07:14.952 00:07:14.952 ' 00:07:14.952 14:17:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:07:14.952 14:17:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:14.952 14:17:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:07:14.952 14:17:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:14.952 14:17:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:14.952 14:17:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:14.952 14:17:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:14.952 14:17:38 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:14.952 14:17:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:14.952 14:17:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2771324 00:07:14.952 14:17:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2771324 00:07:14.952 14:17:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:14.952 14:17:38 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2771324 ']' 00:07:14.952 14:17:38 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.952 14:17:38 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.952 14:17:38 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.953 14:17:38 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.953 14:17:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:15.214 [2024-10-07 14:17:38.749490] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:15.214 [2024-10-07 14:17:38.749633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2771324 ] 00:07:15.214 [2024-10-07 14:17:38.882082] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:15.475 [2024-10-07 14:17:39.065010] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.475 [2024-10-07 14:17:39.065046] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.045 14:17:39 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.045 14:17:39 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:07:16.045 14:17:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2771490 00:07:16.045 14:17:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:16.045 14:17:39 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:16.306 [ 00:07:16.306 "bdev_malloc_delete", 00:07:16.306 "bdev_malloc_create", 00:07:16.306 "bdev_null_resize", 00:07:16.306 "bdev_null_delete", 00:07:16.306 "bdev_null_create", 00:07:16.306 "bdev_nvme_cuse_unregister", 00:07:16.306 "bdev_nvme_cuse_register", 00:07:16.306 "bdev_opal_new_user", 00:07:16.306 "bdev_opal_set_lock_state", 00:07:16.306 "bdev_opal_delete", 00:07:16.306 "bdev_opal_get_info", 00:07:16.306 "bdev_opal_create", 00:07:16.306 "bdev_nvme_opal_revert", 00:07:16.306 "bdev_nvme_opal_init", 00:07:16.306 "bdev_nvme_send_cmd", 00:07:16.306 "bdev_nvme_set_keys", 00:07:16.306 "bdev_nvme_get_path_iostat", 00:07:16.306 "bdev_nvme_get_mdns_discovery_info", 00:07:16.306 "bdev_nvme_stop_mdns_discovery", 00:07:16.306 "bdev_nvme_start_mdns_discovery", 00:07:16.306 "bdev_nvme_set_multipath_policy", 00:07:16.306 "bdev_nvme_set_preferred_path", 00:07:16.306 "bdev_nvme_get_io_paths", 00:07:16.306 "bdev_nvme_remove_error_injection", 00:07:16.306 "bdev_nvme_add_error_injection", 00:07:16.306 "bdev_nvme_get_discovery_info", 00:07:16.306 "bdev_nvme_stop_discovery", 00:07:16.306 "bdev_nvme_start_discovery", 00:07:16.306 "bdev_nvme_get_controller_health_info", 00:07:16.306 "bdev_nvme_disable_controller", 00:07:16.306 "bdev_nvme_enable_controller", 00:07:16.306 "bdev_nvme_reset_controller", 00:07:16.306 "bdev_nvme_get_transport_statistics", 00:07:16.306 "bdev_nvme_apply_firmware", 00:07:16.306 "bdev_nvme_detach_controller", 00:07:16.306 "bdev_nvme_get_controllers", 00:07:16.306 "bdev_nvme_attach_controller", 00:07:16.306 "bdev_nvme_set_hotplug", 00:07:16.306 "bdev_nvme_set_options", 00:07:16.306 "bdev_passthru_delete", 00:07:16.306 "bdev_passthru_create", 00:07:16.306 "bdev_lvol_set_parent_bdev", 00:07:16.306 "bdev_lvol_set_parent", 00:07:16.306 "bdev_lvol_check_shallow_copy", 00:07:16.306 "bdev_lvol_start_shallow_copy", 00:07:16.306 "bdev_lvol_grow_lvstore", 00:07:16.306 
"bdev_lvol_get_lvols", 00:07:16.306 "bdev_lvol_get_lvstores", 00:07:16.306 "bdev_lvol_delete", 00:07:16.306 "bdev_lvol_set_read_only", 00:07:16.306 "bdev_lvol_resize", 00:07:16.306 "bdev_lvol_decouple_parent", 00:07:16.306 "bdev_lvol_inflate", 00:07:16.306 "bdev_lvol_rename", 00:07:16.306 "bdev_lvol_clone_bdev", 00:07:16.306 "bdev_lvol_clone", 00:07:16.306 "bdev_lvol_snapshot", 00:07:16.306 "bdev_lvol_create", 00:07:16.306 "bdev_lvol_delete_lvstore", 00:07:16.306 "bdev_lvol_rename_lvstore", 00:07:16.306 "bdev_lvol_create_lvstore", 00:07:16.306 "bdev_raid_set_options", 00:07:16.306 "bdev_raid_remove_base_bdev", 00:07:16.306 "bdev_raid_add_base_bdev", 00:07:16.306 "bdev_raid_delete", 00:07:16.306 "bdev_raid_create", 00:07:16.306 "bdev_raid_get_bdevs", 00:07:16.306 "bdev_error_inject_error", 00:07:16.306 "bdev_error_delete", 00:07:16.306 "bdev_error_create", 00:07:16.306 "bdev_split_delete", 00:07:16.306 "bdev_split_create", 00:07:16.306 "bdev_delay_delete", 00:07:16.306 "bdev_delay_create", 00:07:16.306 "bdev_delay_update_latency", 00:07:16.306 "bdev_zone_block_delete", 00:07:16.306 "bdev_zone_block_create", 00:07:16.306 "blobfs_create", 00:07:16.306 "blobfs_detect", 00:07:16.306 "blobfs_set_cache_size", 00:07:16.306 "bdev_aio_delete", 00:07:16.306 "bdev_aio_rescan", 00:07:16.306 "bdev_aio_create", 00:07:16.306 "bdev_ftl_set_property", 00:07:16.306 "bdev_ftl_get_properties", 00:07:16.306 "bdev_ftl_get_stats", 00:07:16.306 "bdev_ftl_unmap", 00:07:16.306 "bdev_ftl_unload", 00:07:16.306 "bdev_ftl_delete", 00:07:16.306 "bdev_ftl_load", 00:07:16.306 "bdev_ftl_create", 00:07:16.306 "bdev_virtio_attach_controller", 00:07:16.306 "bdev_virtio_scsi_get_devices", 00:07:16.306 "bdev_virtio_detach_controller", 00:07:16.306 "bdev_virtio_blk_set_hotplug", 00:07:16.306 "bdev_iscsi_delete", 00:07:16.306 "bdev_iscsi_create", 00:07:16.306 "bdev_iscsi_set_options", 00:07:16.306 "accel_error_inject_error", 00:07:16.306 "ioat_scan_accel_module", 00:07:16.306 "dsa_scan_accel_module", 
00:07:16.306 "iaa_scan_accel_module", 00:07:16.306 "keyring_file_remove_key", 00:07:16.306 "keyring_file_add_key", 00:07:16.306 "keyring_linux_set_options", 00:07:16.306 "fsdev_aio_delete", 00:07:16.306 "fsdev_aio_create", 00:07:16.306 "iscsi_get_histogram", 00:07:16.306 "iscsi_enable_histogram", 00:07:16.306 "iscsi_set_options", 00:07:16.306 "iscsi_get_auth_groups", 00:07:16.306 "iscsi_auth_group_remove_secret", 00:07:16.306 "iscsi_auth_group_add_secret", 00:07:16.306 "iscsi_delete_auth_group", 00:07:16.306 "iscsi_create_auth_group", 00:07:16.306 "iscsi_set_discovery_auth", 00:07:16.306 "iscsi_get_options", 00:07:16.306 "iscsi_target_node_request_logout", 00:07:16.306 "iscsi_target_node_set_redirect", 00:07:16.306 "iscsi_target_node_set_auth", 00:07:16.306 "iscsi_target_node_add_lun", 00:07:16.306 "iscsi_get_stats", 00:07:16.306 "iscsi_get_connections", 00:07:16.306 "iscsi_portal_group_set_auth", 00:07:16.306 "iscsi_start_portal_group", 00:07:16.306 "iscsi_delete_portal_group", 00:07:16.306 "iscsi_create_portal_group", 00:07:16.306 "iscsi_get_portal_groups", 00:07:16.306 "iscsi_delete_target_node", 00:07:16.306 "iscsi_target_node_remove_pg_ig_maps", 00:07:16.306 "iscsi_target_node_add_pg_ig_maps", 00:07:16.306 "iscsi_create_target_node", 00:07:16.307 "iscsi_get_target_nodes", 00:07:16.307 "iscsi_delete_initiator_group", 00:07:16.307 "iscsi_initiator_group_remove_initiators", 00:07:16.307 "iscsi_initiator_group_add_initiators", 00:07:16.307 "iscsi_create_initiator_group", 00:07:16.307 "iscsi_get_initiator_groups", 00:07:16.307 "nvmf_set_crdt", 00:07:16.307 "nvmf_set_config", 00:07:16.307 "nvmf_set_max_subsystems", 00:07:16.307 "nvmf_stop_mdns_prr", 00:07:16.307 "nvmf_publish_mdns_prr", 00:07:16.307 "nvmf_subsystem_get_listeners", 00:07:16.307 "nvmf_subsystem_get_qpairs", 00:07:16.307 "nvmf_subsystem_get_controllers", 00:07:16.307 "nvmf_get_stats", 00:07:16.307 "nvmf_get_transports", 00:07:16.307 "nvmf_create_transport", 00:07:16.307 "nvmf_get_targets", 00:07:16.307 
"nvmf_delete_target", 00:07:16.307 "nvmf_create_target", 00:07:16.307 "nvmf_subsystem_allow_any_host", 00:07:16.307 "nvmf_subsystem_set_keys", 00:07:16.307 "nvmf_subsystem_remove_host", 00:07:16.307 "nvmf_subsystem_add_host", 00:07:16.307 "nvmf_ns_remove_host", 00:07:16.307 "nvmf_ns_add_host", 00:07:16.307 "nvmf_subsystem_remove_ns", 00:07:16.307 "nvmf_subsystem_set_ns_ana_group", 00:07:16.307 "nvmf_subsystem_add_ns", 00:07:16.307 "nvmf_subsystem_listener_set_ana_state", 00:07:16.307 "nvmf_discovery_get_referrals", 00:07:16.307 "nvmf_discovery_remove_referral", 00:07:16.307 "nvmf_discovery_add_referral", 00:07:16.307 "nvmf_subsystem_remove_listener", 00:07:16.307 "nvmf_subsystem_add_listener", 00:07:16.307 "nvmf_delete_subsystem", 00:07:16.307 "nvmf_create_subsystem", 00:07:16.307 "nvmf_get_subsystems", 00:07:16.307 "env_dpdk_get_mem_stats", 00:07:16.307 "nbd_get_disks", 00:07:16.307 "nbd_stop_disk", 00:07:16.307 "nbd_start_disk", 00:07:16.307 "ublk_recover_disk", 00:07:16.307 "ublk_get_disks", 00:07:16.307 "ublk_stop_disk", 00:07:16.307 "ublk_start_disk", 00:07:16.307 "ublk_destroy_target", 00:07:16.307 "ublk_create_target", 00:07:16.307 "virtio_blk_create_transport", 00:07:16.307 "virtio_blk_get_transports", 00:07:16.307 "vhost_controller_set_coalescing", 00:07:16.307 "vhost_get_controllers", 00:07:16.307 "vhost_delete_controller", 00:07:16.307 "vhost_create_blk_controller", 00:07:16.307 "vhost_scsi_controller_remove_target", 00:07:16.307 "vhost_scsi_controller_add_target", 00:07:16.307 "vhost_start_scsi_controller", 00:07:16.307 "vhost_create_scsi_controller", 00:07:16.307 "thread_set_cpumask", 00:07:16.307 "scheduler_set_options", 00:07:16.307 "framework_get_governor", 00:07:16.307 "framework_get_scheduler", 00:07:16.307 "framework_set_scheduler", 00:07:16.307 "framework_get_reactors", 00:07:16.307 "thread_get_io_channels", 00:07:16.307 "thread_get_pollers", 00:07:16.307 "thread_get_stats", 00:07:16.307 "framework_monitor_context_switch", 00:07:16.307 
"spdk_kill_instance", 00:07:16.307 "log_enable_timestamps", 00:07:16.307 "log_get_flags", 00:07:16.307 "log_clear_flag", 00:07:16.307 "log_set_flag", 00:07:16.307 "log_get_level", 00:07:16.307 "log_set_level", 00:07:16.307 "log_get_print_level", 00:07:16.307 "log_set_print_level", 00:07:16.307 "framework_enable_cpumask_locks", 00:07:16.307 "framework_disable_cpumask_locks", 00:07:16.307 "framework_wait_init", 00:07:16.307 "framework_start_init", 00:07:16.307 "scsi_get_devices", 00:07:16.307 "bdev_get_histogram", 00:07:16.307 "bdev_enable_histogram", 00:07:16.307 "bdev_set_qos_limit", 00:07:16.307 "bdev_set_qd_sampling_period", 00:07:16.307 "bdev_get_bdevs", 00:07:16.307 "bdev_reset_iostat", 00:07:16.307 "bdev_get_iostat", 00:07:16.307 "bdev_examine", 00:07:16.307 "bdev_wait_for_examine", 00:07:16.307 "bdev_set_options", 00:07:16.307 "accel_get_stats", 00:07:16.307 "accel_set_options", 00:07:16.307 "accel_set_driver", 00:07:16.307 "accel_crypto_key_destroy", 00:07:16.307 "accel_crypto_keys_get", 00:07:16.307 "accel_crypto_key_create", 00:07:16.307 "accel_assign_opc", 00:07:16.307 "accel_get_module_info", 00:07:16.307 "accel_get_opc_assignments", 00:07:16.307 "vmd_rescan", 00:07:16.307 "vmd_remove_device", 00:07:16.307 "vmd_enable", 00:07:16.307 "sock_get_default_impl", 00:07:16.307 "sock_set_default_impl", 00:07:16.307 "sock_impl_set_options", 00:07:16.307 "sock_impl_get_options", 00:07:16.307 "iobuf_get_stats", 00:07:16.307 "iobuf_set_options", 00:07:16.307 "keyring_get_keys", 00:07:16.307 "framework_get_pci_devices", 00:07:16.307 "framework_get_config", 00:07:16.307 "framework_get_subsystems", 00:07:16.307 "fsdev_set_opts", 00:07:16.307 "fsdev_get_opts", 00:07:16.307 "trace_get_info", 00:07:16.307 "trace_get_tpoint_group_mask", 00:07:16.307 "trace_disable_tpoint_group", 00:07:16.307 "trace_enable_tpoint_group", 00:07:16.307 "trace_clear_tpoint_mask", 00:07:16.307 "trace_set_tpoint_mask", 00:07:16.307 "notify_get_notifications", 00:07:16.307 "notify_get_types", 
00:07:16.307 "spdk_get_version", 00:07:16.307 "rpc_get_methods" 00:07:16.307 ] 00:07:16.307 14:17:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:16.307 14:17:39 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:16.307 14:17:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:16.307 14:17:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:16.307 14:17:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2771324 00:07:16.307 14:17:39 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 2771324 ']' 00:07:16.307 14:17:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2771324 00:07:16.307 14:17:39 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:07:16.307 14:17:39 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.307 14:17:39 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2771324 00:07:16.307 14:17:39 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:16.307 14:17:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:16.307 14:17:39 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2771324' 00:07:16.307 killing process with pid 2771324 00:07:16.307 14:17:39 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2771324 00:07:16.307 14:17:39 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2771324 00:07:18.218 00:07:18.218 real 0m3.227s 00:07:18.218 user 0m5.552s 00:07:18.218 sys 0m0.590s 00:07:18.218 14:17:41 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.218 14:17:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:18.218 ************************************ 00:07:18.218 END TEST spdkcli_tcp 00:07:18.218 ************************************ 00:07:18.218 14:17:41 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:18.218 14:17:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:18.218 14:17:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.218 14:17:41 -- common/autotest_common.sh@10 -- # set +x 00:07:18.218 ************************************ 00:07:18.218 START TEST dpdk_mem_utility 00:07:18.218 ************************************ 00:07:18.218 14:17:41 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:18.218 * Looking for test storage... 00:07:18.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:07:18.219 14:17:41 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:18.219 14:17:41 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:18.219 14:17:41 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:07:18.479 14:17:41 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.479 
14:17:41 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.479 14:17:41 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:18.479 14:17:41 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.479 14:17:41 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:18.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.479 --rc genhtml_branch_coverage=1 00:07:18.479 --rc genhtml_function_coverage=1 00:07:18.479 --rc genhtml_legend=1 00:07:18.479 --rc geninfo_all_blocks=1 00:07:18.479 --rc 
geninfo_unexecuted_blocks=1 00:07:18.479 00:07:18.479 ' 00:07:18.479 14:17:41 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:18.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.479 --rc genhtml_branch_coverage=1 00:07:18.479 --rc genhtml_function_coverage=1 00:07:18.479 --rc genhtml_legend=1 00:07:18.479 --rc geninfo_all_blocks=1 00:07:18.479 --rc geninfo_unexecuted_blocks=1 00:07:18.479 00:07:18.479 ' 00:07:18.479 14:17:41 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:18.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.479 --rc genhtml_branch_coverage=1 00:07:18.479 --rc genhtml_function_coverage=1 00:07:18.479 --rc genhtml_legend=1 00:07:18.479 --rc geninfo_all_blocks=1 00:07:18.479 --rc geninfo_unexecuted_blocks=1 00:07:18.479 00:07:18.479 ' 00:07:18.479 14:17:41 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:18.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.479 --rc genhtml_branch_coverage=1 00:07:18.479 --rc genhtml_function_coverage=1 00:07:18.479 --rc genhtml_legend=1 00:07:18.479 --rc geninfo_all_blocks=1 00:07:18.479 --rc geninfo_unexecuted_blocks=1 00:07:18.479 00:07:18.479 ' 00:07:18.479 14:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:18.479 14:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2772031 00:07:18.479 14:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2772031 00:07:18.479 14:17:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:18.480 14:17:41 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2772031 ']' 00:07:18.480 14:17:41 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:18.480 14:17:41 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.480 14:17:41 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.480 14:17:41 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.480 14:17:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:18.480 [2024-10-07 14:17:42.052341] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:18.480 [2024-10-07 14:17:42.052457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2772031 ] 00:07:18.480 [2024-10-07 14:17:42.171558] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.740 [2024-10-07 14:17:42.351262] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.311 14:17:42 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.311 14:17:42 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:07:19.311 14:17:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:19.311 14:17:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:19.311 14:17:42 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.311 14:17:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:19.311 { 00:07:19.311 "filename": "/tmp/spdk_mem_dump.txt" 00:07:19.311 } 00:07:19.311 14:17:43 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.311 
14:17:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:19.573 DPDK memory size 866.000000 MiB in 1 heap(s) 00:07:19.573 1 heaps totaling size 866.000000 MiB 00:07:19.573 size: 866.000000 MiB heap id: 0 00:07:19.573 end heaps---------- 00:07:19.573 9 mempools totaling size 642.649841 MiB 00:07:19.573 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:19.573 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:19.573 size: 92.545471 MiB name: bdev_io_2772031 00:07:19.573 size: 51.011292 MiB name: evtpool_2772031 00:07:19.573 size: 50.003479 MiB name: msgpool_2772031 00:07:19.573 size: 36.509338 MiB name: fsdev_io_2772031 00:07:19.573 size: 21.763794 MiB name: PDU_Pool 00:07:19.573 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:19.573 size: 0.026123 MiB name: Session_Pool 00:07:19.573 end mempools------- 00:07:19.573 6 memzones totaling size 4.142822 MiB 00:07:19.573 size: 1.000366 MiB name: RG_ring_0_2772031 00:07:19.573 size: 1.000366 MiB name: RG_ring_1_2772031 00:07:19.573 size: 1.000366 MiB name: RG_ring_4_2772031 00:07:19.573 size: 1.000366 MiB name: RG_ring_5_2772031 00:07:19.573 size: 0.125366 MiB name: RG_ring_2_2772031 00:07:19.573 size: 0.015991 MiB name: RG_ring_3_2772031 00:07:19.573 end memzones------- 00:07:19.573 14:17:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:19.573 heap id: 0 total size: 866.000000 MiB number of busy elements: 44 number of free elements: 20 00:07:19.573 list of free elements. 
size: 19.979797 MiB
00:07:19.573 element at address: 0x200000400000 with size: 1.999451 MiB
00:07:19.573 element at address: 0x200000800000 with size: 1.996887 MiB
00:07:19.573 element at address: 0x200009600000 with size: 1.995972 MiB
00:07:19.573 element at address: 0x20000d800000 with size: 1.995972 MiB
00:07:19.573 element at address: 0x200007000000 with size: 1.991028 MiB
00:07:19.573 element at address: 0x20001bf00040 with size: 0.999939 MiB
00:07:19.573 element at address: 0x20001c300040 with size: 0.999939 MiB
00:07:19.573 element at address: 0x20001c400000 with size: 0.999329 MiB
00:07:19.573 element at address: 0x200035000000 with size: 0.994324 MiB
00:07:19.573 element at address: 0x20001bc00000 with size: 0.959900 MiB
00:07:19.573 element at address: 0x20001c700040 with size: 0.937256 MiB
00:07:19.573 element at address: 0x200000200000 with size: 0.840942 MiB
00:07:19.573 element at address: 0x20001de00000 with size: 0.583191 MiB
00:07:19.573 element at address: 0x200003e00000 with size: 0.495300 MiB
00:07:19.573 element at address: 0x20001c000000 with size: 0.491150 MiB
00:07:19.573 element at address: 0x20001c800000 with size: 0.485657 MiB
00:07:19.573 element at address: 0x200015e00000 with size: 0.446167 MiB
00:07:19.573 element at address: 0x20002b200000 with size: 0.411072 MiB
00:07:19.573 element at address: 0x200003a00000 with size: 0.355286 MiB
00:07:19.573 element at address: 0x20000d7ff040 with size: 0.001038 MiB
00:07:19.573 list of standard malloc elements.
size: 199.221497 MiB
00:07:19.573 element at address: 0x20000d9fef80 with size: 132.000183 MiB
00:07:19.573 element at address: 0x2000097fef80 with size: 64.000183 MiB
00:07:19.573 element at address: 0x20001bdfff80 with size: 1.000183 MiB
00:07:19.573 element at address: 0x20001c1fff80 with size: 1.000183 MiB
00:07:19.573 element at address: 0x20001c5fff80 with size: 1.000183 MiB
00:07:19.573 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:07:19.573 element at address: 0x20001c7eff40 with size: 0.062683 MiB
00:07:19.573 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:07:19.573 element at address: 0x200015dff040 with size: 0.000427 MiB
00:07:19.573 element at address: 0x200015dffa00 with size: 0.000366 MiB
00:07:19.573 element at address: 0x2000002d7480 with size: 0.000244 MiB
00:07:19.573 element at address: 0x2000002d7580 with size: 0.000244 MiB
00:07:19.573 element at address: 0x2000002d7680 with size: 0.000244 MiB
00:07:19.573 element at address: 0x2000002d7900 with size: 0.000244 MiB
00:07:19.573 element at address: 0x2000002d7a00 with size: 0.000244 MiB
00:07:19.573 element at address: 0x2000002d7b00 with size: 0.000244 MiB
00:07:19.573 element at address: 0x2000003d9d80 with size: 0.000244 MiB
00:07:19.573 element at address: 0x200003a7f3c0 with size: 0.000244 MiB
00:07:19.573 element at address: 0x200003a7f4c0 with size: 0.000244 MiB
00:07:19.573 element at address: 0x200003aff800 with size: 0.000244 MiB
00:07:19.573 element at address: 0x200003affa80 with size: 0.000244 MiB
00:07:19.573 element at address: 0x200003efef00 with size: 0.000244 MiB
00:07:19.573 element at address: 0x200003eff000 with size: 0.000244 MiB
00:07:19.573 element at address: 0x20000d7ff480 with size: 0.000244 MiB
00:07:19.573 element at address: 0x20000d7ff580 with size: 0.000244 MiB
00:07:19.573 element at address: 0x20000d7ff680 with size: 0.000244 MiB
00:07:19.573 element at address: 0x20000d7ff780 with size: 0.000244 MiB
00:07:19.573 element at address: 0x20000d7ff880 with size: 0.000244 MiB
00:07:19.573 element at address: 0x20000d7ff980 with size: 0.000244 MiB
00:07:19.573 element at address: 0x20000d7ffc00 with size: 0.000244 MiB
00:07:19.573 element at address: 0x20000d7ffd00 with size: 0.000244 MiB
00:07:19.573 element at address: 0x20000d7ffe00 with size: 0.000244 MiB
00:07:19.573 element at address: 0x20000d7fff00 with size: 0.000244 MiB
00:07:19.574 element at address: 0x200015dff200 with size: 0.000244 MiB
00:07:19.574 element at address: 0x200015dff300 with size: 0.000244 MiB
00:07:19.574 element at address: 0x200015dff400 with size: 0.000244 MiB
00:07:19.574 element at address: 0x200015dff500 with size: 0.000244 MiB
00:07:19.574 element at address: 0x200015dff600 with size: 0.000244 MiB
00:07:19.574 element at address: 0x200015dff700 with size: 0.000244 MiB
00:07:19.574 element at address: 0x200015dff800 with size: 0.000244 MiB
00:07:19.574 element at address: 0x200015dff900 with size: 0.000244 MiB
00:07:19.574 element at address: 0x200015dffb80 with size: 0.000244 MiB
00:07:19.574 element at address: 0x200015dffc80 with size: 0.000244 MiB
00:07:19.574 element at address: 0x200015dfff00 with size: 0.000244 MiB
00:07:19.574 list of memzone associated elements.
size: 646.798706 MiB
00:07:19.574 element at address: 0x20001de954c0 with size: 211.416809 MiB
00:07:19.574 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:07:19.574 element at address: 0x20002b26ff80 with size: 157.562622 MiB
00:07:19.574 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:07:19.574 element at address: 0x200015ff4740 with size: 92.045105 MiB
00:07:19.574 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2772031_0
00:07:19.574 element at address: 0x2000009ff340 with size: 48.003113 MiB
00:07:19.574 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2772031_0
00:07:19.574 element at address: 0x200003fff340 with size: 48.003113 MiB
00:07:19.574 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2772031_0
00:07:19.574 element at address: 0x2000071fdb40 with size: 36.008972 MiB
00:07:19.574 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2772031_0
00:07:19.574 element at address: 0x20001c9be900 with size: 20.255615 MiB
00:07:19.574 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:07:19.574 element at address: 0x2000351feb00 with size: 18.005127 MiB
00:07:19.574 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:07:19.574 element at address: 0x2000005ffdc0 with size: 2.000549 MiB
00:07:19.574 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2772031
00:07:19.574 element at address: 0x200003bffdc0 with size: 2.000549 MiB
00:07:19.574 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2772031
00:07:19.574 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:07:19.574 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2772031
00:07:19.574 element at address: 0x20001c0fde00 with size: 1.008179 MiB
00:07:19.574 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:07:19.574 element at address: 0x20001c8bc780 with size: 1.008179 MiB
00:07:19.574 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:07:19.574 element at address: 0x20001bcfde00 with size: 1.008179 MiB
00:07:19.574 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:07:19.574 element at address: 0x200015ef25c0 with size: 1.008179 MiB
00:07:19.574 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:07:19.574 element at address: 0x200003eff100 with size: 1.000549 MiB
00:07:19.574 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2772031
00:07:19.574 element at address: 0x200003affb80 with size: 1.000549 MiB
00:07:19.574 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2772031
00:07:19.574 element at address: 0x20001c4ffd40 with size: 1.000549 MiB
00:07:19.574 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2772031
00:07:19.574 element at address: 0x2000350fe8c0 with size: 1.000549 MiB
00:07:19.574 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2772031
00:07:19.574 element at address: 0x200003a7f5c0 with size: 0.500549 MiB
00:07:19.574 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2772031
00:07:19.574 element at address: 0x200003e7ecc0 with size: 0.500549 MiB
00:07:19.574 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2772031
00:07:19.574 element at address: 0x20001c07dbc0 with size: 0.500549 MiB
00:07:19.574 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:07:19.574 element at address: 0x200015e72380 with size: 0.500549 MiB
00:07:19.574 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:07:19.574 element at address: 0x20001c87c540 with size: 0.250549 MiB
00:07:19.574 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:07:19.574 element at address: 0x200003a5f180 with size: 0.125549 MiB
00:07:19.574 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2772031
00:07:19.574 element at address: 0x20001bcf5bc0 with size: 0.031799 MiB
00:07:19.574 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:07:19.574 element at address: 0x20002b2693c0 with size: 0.023804 MiB
00:07:19.574 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:07:19.574 element at address: 0x200003a5af40 with size: 0.016174 MiB
00:07:19.574 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2772031
00:07:19.574 element at address: 0x20002b26f540 with size: 0.002502 MiB
00:07:19.574 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:07:19.574 element at address: 0x2000002d7780 with size: 0.000366 MiB
00:07:19.574 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2772031
00:07:19.574 element at address: 0x200003aff900 with size: 0.000366 MiB
00:07:19.574 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2772031
00:07:19.574 element at address: 0x200015dffd80 with size: 0.000366 MiB
00:07:19.574 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2772031
00:07:19.574 element at address: 0x20000d7ffa80 with size: 0.000366 MiB
00:07:19.574 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:07:19.574 14:17:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:07:19.574 14:17:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2772031
00:07:19.574 14:17:43 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2772031 ']'
00:07:19.574 14:17:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2772031
00:07:19.574 14:17:43 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:07:19.574 14:17:43 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:19.574 14:17:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2772031
00:07:19.574 14:17:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:19.574
14:17:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:19.574 14:17:43 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2772031'
00:07:19.574 killing process with pid 2772031
00:07:19.574 14:17:43 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2772031
00:07:19.574 14:17:43 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2772031
00:07:21.489
00:07:21.489 real 0m3.111s
00:07:21.489 user 0m3.064s
00:07:21.489 sys 0m0.534s
00:07:21.489 14:17:44 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:21.489 14:17:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:21.489 ************************************
00:07:21.489 END TEST dpdk_mem_utility
00:07:21.489 ************************************
00:07:21.489 14:17:44 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:07:21.489 14:17:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:21.489 14:17:44 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:21.489 14:17:44 -- common/autotest_common.sh@10 -- # set +x
00:07:21.489 ************************************
00:07:21.489 START TEST event
00:07:21.489 ************************************
00:07:21.489 14:17:44 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:07:21.489 * Looking for test storage...
00:07:21.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:07:21.489 14:17:45 event -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:07:21.489 14:17:45 event -- common/autotest_common.sh@1681 -- # lcov --version
00:07:21.489 14:17:45 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:07:21.489 14:17:45 event -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:07:21.489 14:17:45 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:21.489 14:17:45 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:21.489 14:17:45 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:21.489 14:17:45 event -- scripts/common.sh@336 -- # IFS=.-:
00:07:21.489 14:17:45 event -- scripts/common.sh@336 -- # read -ra ver1
00:07:21.489 14:17:45 event -- scripts/common.sh@337 -- # IFS=.-:
00:07:21.489 14:17:45 event -- scripts/common.sh@337 -- # read -ra ver2
00:07:21.489 14:17:45 event -- scripts/common.sh@338 -- # local 'op=<'
00:07:21.489 14:17:45 event -- scripts/common.sh@340 -- # ver1_l=2
00:07:21.489 14:17:45 event -- scripts/common.sh@341 -- # ver2_l=1
00:07:21.489 14:17:45 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:21.489 14:17:45 event -- scripts/common.sh@344 -- # case "$op" in
00:07:21.489 14:17:45 event -- scripts/common.sh@345 -- # : 1
00:07:21.489 14:17:45 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:21.489 14:17:45 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:21.489 14:17:45 event -- scripts/common.sh@365 -- # decimal 1
00:07:21.489 14:17:45 event -- scripts/common.sh@353 -- # local d=1
00:07:21.489 14:17:45 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:21.489 14:17:45 event -- scripts/common.sh@355 -- # echo 1
00:07:21.489 14:17:45 event -- scripts/common.sh@365 -- # ver1[v]=1
00:07:21.489 14:17:45 event -- scripts/common.sh@366 -- # decimal 2
00:07:21.489 14:17:45 event -- scripts/common.sh@353 -- # local d=2
00:07:21.489 14:17:45 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:21.489 14:17:45 event -- scripts/common.sh@355 -- # echo 2
00:07:21.489 14:17:45 event -- scripts/common.sh@366 -- # ver2[v]=2
00:07:21.489 14:17:45 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:21.489 14:17:45 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:21.489 14:17:45 event -- scripts/common.sh@368 -- # return 0
00:07:21.489 14:17:45 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:21.489 14:17:45 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:07:21.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:21.489 --rc genhtml_branch_coverage=1
00:07:21.489 --rc genhtml_function_coverage=1
00:07:21.489 --rc genhtml_legend=1
00:07:21.489 --rc geninfo_all_blocks=1
00:07:21.489 --rc geninfo_unexecuted_blocks=1
00:07:21.489
00:07:21.489 '
00:07:21.489 14:17:45 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:07:21.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:21.489 --rc genhtml_branch_coverage=1
00:07:21.489 --rc genhtml_function_coverage=1
00:07:21.489 --rc genhtml_legend=1
00:07:21.489 --rc geninfo_all_blocks=1
00:07:21.489 --rc geninfo_unexecuted_blocks=1
00:07:21.489
00:07:21.489 '
00:07:21.489 14:17:45 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:07:21.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:21.489 --rc genhtml_branch_coverage=1
00:07:21.489 --rc genhtml_function_coverage=1
00:07:21.489 --rc genhtml_legend=1
00:07:21.489 --rc geninfo_all_blocks=1
00:07:21.489 --rc geninfo_unexecuted_blocks=1
00:07:21.489
00:07:21.489 '
00:07:21.489 14:17:45 event -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:07:21.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:21.489 --rc genhtml_branch_coverage=1
00:07:21.489 --rc genhtml_function_coverage=1
00:07:21.489 --rc genhtml_legend=1
00:07:21.489 --rc geninfo_all_blocks=1
00:07:21.489 --rc geninfo_unexecuted_blocks=1
00:07:21.489
00:07:21.489 '
00:07:21.489 14:17:45 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:07:21.489 14:17:45 event -- bdev/nbd_common.sh@6 -- # set -e
00:07:21.489 14:17:45 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:21.489 14:17:45 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:07:21.489 14:17:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:21.489 14:17:45 event -- common/autotest_common.sh@10 -- # set +x
00:07:21.489 ************************************
00:07:21.489 START TEST event_perf
00:07:21.489 ************************************
00:07:21.489 14:17:45 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:21.750 Running I/O for 1 seconds...[2024-10-07 14:17:45.234283] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:07:21.750 [2024-10-07 14:17:45.234387] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2772670 ]
00:07:21.750 [2024-10-07 14:17:45.366880] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:22.010 [2024-10-07 14:17:45.552064] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:07:22.010 [2024-10-07 14:17:45.552194] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:07:22.010 [2024-10-07 14:17:45.552399] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:07:22.010 [2024-10-07 14:17:45.552423] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:07:23.395 Running I/O for 1 seconds...
00:07:23.395 lcore 0: 189276
00:07:23.395 lcore 1: 189274
00:07:23.395 lcore 2: 189273
00:07:23.395 lcore 3: 189276
00:07:23.395 done.
00:07:23.395
00:07:23.395 real 0m1.656s
00:07:23.395 user 0m4.488s
00:07:23.395 sys 0m0.164s
00:07:23.395 14:17:46 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:23.395 14:17:46 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:07:23.395 ************************************
00:07:23.395 END TEST event_perf
00:07:23.395 ************************************
00:07:23.395 14:17:46 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:07:23.395 14:17:46 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:07:23.395 14:17:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:23.395 14:17:46 event -- common/autotest_common.sh@10 -- # set +x
00:07:23.395 ************************************
00:07:23.395 START TEST event_reactor
00:07:23.395 ************************************
00:07:23.395 14:17:46 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:07:23.395 [2024-10-07 14:17:46.962791] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:07:23.395 [2024-10-07 14:17:46.962895] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2773053 ]
00:07:23.395 [2024-10-07 14:17:47.088676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:23.655 [2024-10-07 14:17:47.269492] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:07:25.039 test_start
00:07:25.039 oneshot
00:07:25.039 tick 100
00:07:25.039 tick 100
00:07:25.039 tick 250
00:07:25.039 tick 100
00:07:25.039 tick 100
00:07:25.039 tick 250
00:07:25.039 tick 100
00:07:25.039 tick 500
00:07:25.039 tick 100
00:07:25.039 tick 100
00:07:25.039 tick 250
00:07:25.039 tick 100
00:07:25.039 tick 100
00:07:25.039 test_end
00:07:25.039
00:07:25.039 real 0m1.635s
00:07:25.039 user 0m1.491s
00:07:25.039 sys 0m0.137s
00:07:25.039 14:17:48 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:25.039 14:17:48 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:07:25.039 ************************************
00:07:25.039 END TEST event_reactor
00:07:25.039 ************************************
00:07:25.039 14:17:48 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:25.039 14:17:48 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:07:25.039 14:17:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:25.039 14:17:48 event -- common/autotest_common.sh@10 -- # set +x
00:07:25.039 ************************************
00:07:25.039 START TEST event_reactor_perf
00:07:25.039 ************************************
00:07:25.039 14:17:48 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:25.039 [2024-10-07 14:17:48.674776] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:07:25.039 [2024-10-07 14:17:48.674880] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2773428 ]
00:07:25.300 [2024-10-07 14:17:48.795911] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:25.300 [2024-10-07 14:17:48.975190] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:07:26.684 test_start
00:07:26.684 test_end
00:07:26.684 Performance: 296381 events per second
00:07:26.684
00:07:26.684 real 0m1.632s
00:07:26.684 user 0m1.490s
00:07:26.684 sys 0m0.135s
00:07:26.684 14:17:50 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:26.684 14:17:50 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:07:26.684 ************************************
00:07:26.684 END TEST event_reactor_perf
00:07:26.684 ************************************
00:07:26.684 14:17:50 event -- event/event.sh@49 -- # uname -s
00:07:26.684 14:17:50 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:07:26.684 14:17:50 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:07:26.684 14:17:50 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:26.684 14:17:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:26.684 14:17:50 event -- common/autotest_common.sh@10 -- # set +x
00:07:26.684 ************************************
00:07:26.684 START TEST event_scheduler
00:07:26.684 ************************************
00:07:26.684 14:17:50 event.event_scheduler -- common/autotest_common.sh@1125 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:07:26.945 * Looking for test storage...
00:07:26.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:07:26.945 14:17:50 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:07:26.945 14:17:50 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version
00:07:26.945 14:17:50 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:07:26.945 14:17:50 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:26.945 14:17:50 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:07:26.945 14:17:50 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:26.945 14:17:50 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:07:26.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:26.945 --rc genhtml_branch_coverage=1
00:07:26.945 --rc genhtml_function_coverage=1
00:07:26.945 --rc genhtml_legend=1
00:07:26.945 --rc geninfo_all_blocks=1
00:07:26.945 --rc geninfo_unexecuted_blocks=1
00:07:26.945
00:07:26.945 '
00:07:26.945 14:17:50 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:07:26.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:26.945 --rc genhtml_branch_coverage=1
00:07:26.945 --rc genhtml_function_coverage=1
00:07:26.945 --rc genhtml_legend=1
00:07:26.945 --rc geninfo_all_blocks=1
00:07:26.945 --rc geninfo_unexecuted_blocks=1
00:07:26.945
00:07:26.945 '
00:07:26.945 14:17:50 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:07:26.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:26.945 --rc genhtml_branch_coverage=1
00:07:26.945 --rc genhtml_function_coverage=1
00:07:26.945 --rc genhtml_legend=1
00:07:26.945 --rc geninfo_all_blocks=1
00:07:26.945 --rc geninfo_unexecuted_blocks=1
00:07:26.945
00:07:26.945 '
00:07:26.945 14:17:50 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:07:26.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:26.945 --rc genhtml_branch_coverage=1
00:07:26.945 --rc genhtml_function_coverage=1
00:07:26.945 --rc genhtml_legend=1
00:07:26.945 --rc geninfo_all_blocks=1
00:07:26.945 --rc geninfo_unexecuted_blocks=1
00:07:26.945
00:07:26.945 '
00:07:26.945 14:17:50 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:07:26.945 14:17:50 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2773979
00:07:26.945 14:17:50 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:07:26.945 14:17:50 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2773979
00:07:26.945 14:17:50 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:07:26.945 14:17:50 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2773979 ']'
00:07:26.945 14:17:50 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:26.945 14:17:50 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:26.945 14:17:50 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:26.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:26.945 14:17:50 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:26.945 14:17:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:26.945 [2024-10-07 14:17:50.620629] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:07:26.945 [2024-10-07 14:17:50.620743] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2773979 ]
00:07:27.206 [2024-10-07 14:17:50.724411] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:27.206 [2024-10-07 14:17:50.862577] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:07:27.206 [2024-10-07 14:17:50.862735] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:07:27.206 [2024-10-07 14:17:50.862828] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:07:27.206 [2024-10-07 14:17:50.862855] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:07:27.777 14:17:51 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:27.777 14:17:51 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0
00:07:27.777 14:17:51 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:07:27.777 14:17:51 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.777 14:17:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:27.777 [2024-10-07 14:17:51.396813] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:07:27.777 [2024-10-07 14:17:51.396835] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:07:27.777 [2024-10-07 14:17:51.396851] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:07:27.777 [2024-10-07 14:17:51.396859] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:07:27.777 [2024-10-07 14:17:51.396871] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:07:27.777 14:17:51 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.777 14:17:51 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:07:27.777 14:17:51 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.777 14:17:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:28.039 [2024-10-07 14:17:51.576302] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:07:28.039 14:17:51 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:28.039 14:17:51 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:07:28.039 14:17:51 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:28.039 14:17:51 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:28.039 14:17:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:28.039 ************************************
00:07:28.039 START TEST scheduler_create_thread
00:07:28.039 ************************************
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:28.039 2
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:28.039 3
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:28.039 4
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:28.039 5
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:28.039 6
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:28.039 7
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:28.039 8
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:28.039 9
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:28.039 14:17:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:29.424 10
00:07:29.424 14:17:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:29.424 14:17:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:07:29.424 14:17:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:29.424 14:17:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:30.365 14:17:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.365 14:17:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:07:30.365 14:17:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:07:30.365 14:17:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:30.365 14:17:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:30.935 14:17:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:30.935 14:17:54
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:30.935 14:17:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.935 14:17:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.878 14:17:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.878 14:17:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:31.878 14:17:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:31.878 14:17:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.878 14:17:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:32.451 14:17:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.451 00:07:32.451 real 0m4.267s 00:07:32.451 user 0m0.027s 00:07:32.451 sys 0m0.005s 00:07:32.451 14:17:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.451 14:17:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:32.451 ************************************ 00:07:32.451 END TEST scheduler_create_thread 00:07:32.451 ************************************ 00:07:32.451 14:17:55 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:32.451 14:17:55 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2773979 00:07:32.451 14:17:55 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2773979 ']' 00:07:32.451 14:17:55 event.event_scheduler -- common/autotest_common.sh@954 -- # 
kill -0 2773979 00:07:32.451 14:17:55 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:32.451 14:17:55 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:32.451 14:17:55 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2773979 00:07:32.451 14:17:55 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:32.451 14:17:55 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:32.451 14:17:55 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2773979' 00:07:32.451 killing process with pid 2773979 00:07:32.451 14:17:55 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2773979 00:07:32.451 14:17:55 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2773979 00:07:32.711 [2024-10-07 14:17:56.214235] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:07:33.283 00:07:33.283 real 0m6.578s 00:07:33.283 user 0m14.758s 00:07:33.283 sys 0m0.513s 00:07:33.283 14:17:56 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.283 14:17:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:33.283 ************************************ 00:07:33.283 END TEST event_scheduler 00:07:33.283 ************************************ 00:07:33.283 14:17:56 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:33.283 14:17:56 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:33.283 14:17:56 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:33.283 14:17:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.283 14:17:56 event -- common/autotest_common.sh@10 -- # set +x 00:07:33.544 ************************************ 00:07:33.544 START TEST app_repeat 00:07:33.544 ************************************ 00:07:33.544 14:17:56 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:33.544 14:17:56 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.544 14:17:56 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.544 14:17:56 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:33.544 14:17:56 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:33.544 14:17:56 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:33.544 14:17:56 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:33.544 14:17:56 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:33.544 14:17:57 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2775237 00:07:33.544 14:17:57 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:33.544 14:17:57 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2775237' 00:07:33.544 
Process app_repeat pid: 2775237 00:07:33.544 14:17:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:33.544 14:17:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:33.544 spdk_app_start Round 0 00:07:33.544 14:17:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2775237 /var/tmp/spdk-nbd.sock 00:07:33.544 14:17:57 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2775237 ']' 00:07:33.544 14:17:57 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:33.544 14:17:57 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.544 14:17:57 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:33.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:33.544 14:17:57 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.544 14:17:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:33.544 14:17:57 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:33.544 [2024-10-07 14:17:57.054969] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:07:33.544 [2024-10-07 14:17:57.055087] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2775237 ] 00:07:33.544 [2024-10-07 14:17:57.181401] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:33.806 [2024-10-07 14:17:57.364410] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.806 [2024-10-07 14:17:57.364434] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.377 14:17:57 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.377 14:17:57 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:34.377 14:17:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:34.377 Malloc0 00:07:34.377 14:17:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:34.638 Malloc1 00:07:34.638 14:17:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:34.638 14:17:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.638 14:17:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:34.638 14:17:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:34.638 14:17:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:34.638 14:17:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:34.638 14:17:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:34.638 
14:17:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.639 14:17:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:34.639 14:17:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:34.639 14:17:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:34.639 14:17:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:34.639 14:17:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:34.639 14:17:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:34.639 14:17:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:34.639 14:17:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:34.899 /dev/nbd0 00:07:34.899 14:17:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:34.899 14:17:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:34.899 14:17:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:34.899 14:17:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:34.899 14:17:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:34.899 14:17:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:34.899 14:17:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:34.899 14:17:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:34.899 14:17:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:34.899 14:17:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:34.899 14:17:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:34.899 1+0 records in 00:07:34.899 1+0 records out 00:07:34.899 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201222 s, 20.4 MB/s 00:07:34.899 14:17:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:34.899 14:17:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:34.899 14:17:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:34.899 14:17:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:34.899 14:17:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:34.899 14:17:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:34.899 14:17:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:34.899 14:17:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:35.161 /dev/nbd1 00:07:35.161 14:17:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:35.161 14:17:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:35.161 14:17:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:35.161 14:17:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:35.161 14:17:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:35.161 14:17:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:35.161 14:17:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:35.161 14:17:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:35.161 14:17:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:35.161 14:17:58 event.app_repeat -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:35.161 14:17:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:35.161 1+0 records in 00:07:35.161 1+0 records out 00:07:35.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353036 s, 11.6 MB/s 00:07:35.161 14:17:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:35.161 14:17:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:35.161 14:17:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:35.161 14:17:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:35.161 14:17:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:35.161 14:17:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:35.161 14:17:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:35.161 14:17:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:35.161 14:17:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.161 14:17:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:35.423 14:17:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:35.423 { 00:07:35.423 "nbd_device": "/dev/nbd0", 00:07:35.423 "bdev_name": "Malloc0" 00:07:35.423 }, 00:07:35.423 { 00:07:35.423 "nbd_device": "/dev/nbd1", 00:07:35.423 "bdev_name": "Malloc1" 00:07:35.423 } 00:07:35.423 ]' 00:07:35.423 14:17:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:35.423 { 00:07:35.423 "nbd_device": "/dev/nbd0", 00:07:35.423 "bdev_name": "Malloc0" 00:07:35.423 
}, 00:07:35.423 { 00:07:35.423 "nbd_device": "/dev/nbd1", 00:07:35.423 "bdev_name": "Malloc1" 00:07:35.423 } 00:07:35.423 ]' 00:07:35.423 14:17:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:35.423 14:17:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:35.423 /dev/nbd1' 00:07:35.423 14:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:35.423 /dev/nbd1' 00:07:35.423 14:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:35.423 14:17:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:35.423 14:17:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:35.423 14:17:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:35.423 14:17:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:35.423 14:17:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:35.423 14:17:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:35.423 14:17:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:35.423 14:17:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:35.423 14:17:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:35.423 14:17:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:35.423 14:17:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:35.423 256+0 records in 00:07:35.423 256+0 records out 00:07:35.423 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00432199 s, 243 MB/s 00:07:35.423 14:17:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:35.423 14:17:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:35.423 256+0 records in 00:07:35.423 256+0 records out 00:07:35.423 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122152 s, 85.8 MB/s 00:07:35.423 14:17:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:35.423 14:17:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:35.423 256+0 records in 00:07:35.423 256+0 records out 00:07:35.423 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145185 s, 72.2 MB/s 00:07:35.423 14:17:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:35.423 14:17:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:35.423 14:17:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:35.423 14:17:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:35.423 14:17:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:35.423 14:17:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:35.423 14:17:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:35.423 14:17:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:35.423 14:17:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:35.424 14:17:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:35.424 14:17:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:35.424 14:17:59 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:35.424 14:17:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:35.424 14:17:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.424 14:17:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:35.424 14:17:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:35.424 14:17:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:35.424 14:17:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.424 14:17:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:35.685 14:17:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:35.685 14:17:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:35.685 14:17:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:35.685 14:17:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.685 14:17:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.685 14:17:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:35.685 14:17:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:35.685 14:17:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.685 14:17:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.685 14:17:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:35.685 14:17:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:35.685 14:17:59 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:35.685 14:17:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:35.685 14:17:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.685 14:17:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.685 14:17:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:35.946 14:17:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:35.946 14:17:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.946 14:17:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:35.946 14:17:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.946 14:17:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:35.946 14:17:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:35.946 14:17:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:35.946 14:17:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:35.946 14:17:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:35.946 14:17:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:35.946 14:17:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:35.946 14:17:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:35.946 14:17:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:35.946 14:17:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:35.946 14:17:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:35.946 14:17:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:35.946 14:17:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:35.946 14:17:59 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:36.207 14:17:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:37.150 [2024-10-07 14:18:00.843425] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:37.410 [2024-10-07 14:18:01.017939] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.410 [2024-10-07 14:18:01.017941] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.671 [2024-10-07 14:18:01.156367] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:37.671 [2024-10-07 14:18:01.156416] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:39.586 14:18:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:39.586 14:18:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:39.586 spdk_app_start Round 1 00:07:39.586 14:18:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2775237 /var/tmp/spdk-nbd.sock 00:07:39.586 14:18:02 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2775237 ']' 00:07:39.586 14:18:02 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:39.586 14:18:02 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.586 14:18:02 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:39.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:39.586 14:18:02 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.586 14:18:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:39.586 14:18:03 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.586 14:18:03 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:39.586 14:18:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:39.586 Malloc0 00:07:39.586 14:18:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:39.847 Malloc1 00:07:39.847 14:18:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:39.847 14:18:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.847 14:18:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:39.847 14:18:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:39.847 14:18:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.847 14:18:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:39.847 14:18:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:39.847 14:18:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.847 14:18:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:39.847 14:18:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:39.847 14:18:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.847 14:18:03 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:39.847 14:18:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:39.847 14:18:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:39.847 14:18:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:39.847 14:18:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:40.109 /dev/nbd0 00:07:40.109 14:18:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:40.109 14:18:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:40.109 14:18:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:40.109 14:18:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:40.109 14:18:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:40.109 14:18:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:40.109 14:18:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:40.109 14:18:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:40.109 14:18:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:40.109 14:18:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:40.109 14:18:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:40.109 1+0 records in 00:07:40.110 1+0 records out 00:07:40.110 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234521 s, 17.5 MB/s 00:07:40.110 14:18:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:40.110 14:18:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:40.110 14:18:03 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:40.110 14:18:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:40.110 14:18:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:40.110 14:18:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:40.110 14:18:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:40.110 14:18:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:40.370 /dev/nbd1 00:07:40.370 14:18:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:40.370 14:18:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:40.370 14:18:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:40.370 14:18:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:40.370 14:18:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:40.370 14:18:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:40.370 14:18:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:40.370 14:18:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:40.370 14:18:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:40.370 14:18:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:40.370 14:18:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:40.370 1+0 records in 00:07:40.370 1+0 records out 00:07:40.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280733 s, 14.6 MB/s 00:07:40.370 14:18:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:40.370 14:18:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:40.370 14:18:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:40.370 14:18:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:40.370 14:18:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:40.370 14:18:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:40.370 14:18:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:40.370 14:18:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:40.371 14:18:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.371 14:18:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:40.631 14:18:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:40.631 { 00:07:40.631 "nbd_device": "/dev/nbd0", 00:07:40.631 "bdev_name": "Malloc0" 00:07:40.631 }, 00:07:40.631 { 00:07:40.631 "nbd_device": "/dev/nbd1", 00:07:40.631 "bdev_name": "Malloc1" 00:07:40.631 } 00:07:40.631 ]' 00:07:40.631 14:18:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:40.631 { 00:07:40.631 "nbd_device": "/dev/nbd0", 00:07:40.631 "bdev_name": "Malloc0" 00:07:40.631 }, 00:07:40.631 { 00:07:40.631 "nbd_device": "/dev/nbd1", 00:07:40.631 "bdev_name": "Malloc1" 00:07:40.631 } 00:07:40.631 ]' 00:07:40.631 14:18:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:40.631 14:18:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:40.631 /dev/nbd1' 00:07:40.631 14:18:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:40.631 /dev/nbd1' 00:07:40.631 
14:18:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:40.631 14:18:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:40.631 14:18:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:40.631 14:18:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:40.631 14:18:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:40.631 14:18:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:40.631 14:18:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:40.631 14:18:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:40.631 14:18:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:40.631 14:18:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:40.631 14:18:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:40.631 14:18:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:40.631 256+0 records in 00:07:40.631 256+0 records out 00:07:40.631 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124198 s, 84.4 MB/s 00:07:40.631 14:18:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:40.631 14:18:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:40.631 256+0 records in 00:07:40.631 256+0 records out 00:07:40.631 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195311 s, 53.7 MB/s 00:07:40.631 14:18:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:40.631 14:18:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:40.631 256+0 records in 00:07:40.631 256+0 records out 00:07:40.631 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221592 s, 47.3 MB/s 00:07:40.631 14:18:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:40.631 14:18:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:40.631 14:18:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:40.632 14:18:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:40.632 14:18:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:40.632 14:18:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:40.632 14:18:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:40.632 14:18:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:40.632 14:18:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:40.632 14:18:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:40.632 14:18:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:40.632 14:18:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:40.632 14:18:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:40.632 14:18:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.632 14:18:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:40.632 14:18:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:40.632 14:18:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:40.632 14:18:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.632 14:18:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:40.893 14:18:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:40.893 14:18:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:40.893 14:18:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:40.893 14:18:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:40.893 14:18:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:40.893 14:18:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:40.893 14:18:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:40.893 14:18:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:40.893 14:18:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.893 14:18:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:41.153 14:18:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:41.153 14:18:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:41.153 14:18:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:41.153 14:18:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.153 14:18:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.153 14:18:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:41.153 14:18:04 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:41.153 14:18:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.153 14:18:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:41.153 14:18:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.153 14:18:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:41.153 14:18:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:41.153 14:18:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:41.153 14:18:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:41.153 14:18:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:41.153 14:18:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:41.153 14:18:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:41.153 14:18:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:41.153 14:18:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:41.153 14:18:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:41.153 14:18:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:41.153 14:18:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:41.153 14:18:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:41.153 14:18:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:41.725 14:18:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:42.668 [2024-10-07 14:18:06.072874] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:42.668 [2024-10-07 14:18:06.241326] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.668 [2024-10-07 14:18:06.241401] 
reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.928 [2024-10-07 14:18:06.379657] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:42.928 [2024-10-07 14:18:06.379705] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:44.839 14:18:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:44.839 14:18:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:44.839 spdk_app_start Round 2 00:07:44.839 14:18:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2775237 /var/tmp/spdk-nbd.sock 00:07:44.839 14:18:08 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2775237 ']' 00:07:44.839 14:18:08 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:44.839 14:18:08 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.839 14:18:08 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:44.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
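The trace above repeatedly enters a `waitfornbd` helper (autotest_common.sh lines 868–889 in the xtrace) that polls `/proc/partitions` until the freshly started NBD device shows up, then does a single direct-I/O read to confirm it is usable. The following is an illustrative reconstruction of the polling half, inferred from the xtrace only; the function body, the `sleep` interval, and the retry cap of 20 are approximations, not the verbatim SPDK source.

```shell
# Sketch of the waitfornbd pattern from the log: poll /proc/partitions
# until the named device (e.g. nbd0) appears, giving up after 20 tries.
# Reconstructed from xtrace output; not the verbatim SPDK helper.
waitfornbd() {
  local nbd_name=$1
  local i
  for ((i = 1; i <= 20; i++)); do
    if grep -q -w "$nbd_name" /proc/partitions; then
      return 0        # device is visible to the kernel; stop waiting
    fi
    sleep 0.1         # brief pause before the next poll (interval assumed)
  done
  return 1            # device never appeared within the retry budget
}
```

In the real run the helper additionally `dd`s one 4096-byte block off the device with `iflag=direct` and checks the resulting file size with `stat -c %s`, which is why each successful wait in the log is followed by a `1+0 records in / 1+0 records out` pair.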
00:07:44.839 14:18:08 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.839 14:18:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:44.839 14:18:08 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.839 14:18:08 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:44.839 14:18:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:44.839 Malloc0 00:07:44.839 14:18:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:45.099 Malloc1 00:07:45.099 14:18:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:45.099 14:18:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.099 14:18:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:45.099 14:18:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:45.099 14:18:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.099 14:18:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:45.099 14:18:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:45.099 14:18:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.099 14:18:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:45.099 14:18:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:45.099 14:18:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.099 14:18:08 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:45.099 14:18:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:45.099 14:18:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:45.099 14:18:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:45.099 14:18:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:45.359 /dev/nbd0 00:07:45.359 14:18:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:45.359 14:18:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:45.359 14:18:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:45.359 14:18:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:45.359 14:18:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:45.359 14:18:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:45.359 14:18:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:45.359 14:18:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:45.359 14:18:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:45.359 14:18:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:45.359 14:18:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:45.359 1+0 records in 00:07:45.359 1+0 records out 00:07:45.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358853 s, 11.4 MB/s 00:07:45.359 14:18:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:45.359 14:18:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:45.359 14:18:08 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:45.359 14:18:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:45.359 14:18:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:45.359 14:18:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:45.359 14:18:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:45.359 14:18:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:45.618 /dev/nbd1 00:07:45.618 14:18:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:45.618 14:18:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:45.618 14:18:09 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:45.618 14:18:09 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:45.618 14:18:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:45.618 14:18:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:45.618 14:18:09 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:45.618 14:18:09 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:45.618 14:18:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:45.618 14:18:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:45.618 14:18:09 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:45.618 1+0 records in 00:07:45.618 1+0 records out 00:07:45.618 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225755 s, 18.1 MB/s 00:07:45.618 14:18:09 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:45.618 14:18:09 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:45.618 14:18:09 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:45.618 14:18:09 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:45.618 14:18:09 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:45.618 14:18:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:45.618 14:18:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:45.618 14:18:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:45.618 14:18:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.618 14:18:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:45.877 14:18:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:45.877 { 00:07:45.877 "nbd_device": "/dev/nbd0", 00:07:45.877 "bdev_name": "Malloc0" 00:07:45.877 }, 00:07:45.877 { 00:07:45.877 "nbd_device": "/dev/nbd1", 00:07:45.877 "bdev_name": "Malloc1" 00:07:45.877 } 00:07:45.877 ]' 00:07:45.877 14:18:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:45.877 { 00:07:45.877 "nbd_device": "/dev/nbd0", 00:07:45.877 "bdev_name": "Malloc0" 00:07:45.877 }, 00:07:45.877 { 00:07:45.877 "nbd_device": "/dev/nbd1", 00:07:45.877 "bdev_name": "Malloc1" 00:07:45.877 } 00:07:45.877 ]' 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:45.878 /dev/nbd1' 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:45.878 /dev/nbd1' 00:07:45.878 
14:18:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:45.878 256+0 records in 00:07:45.878 256+0 records out 00:07:45.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012736 s, 82.3 MB/s 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:45.878 256+0 records in 00:07:45.878 256+0 records out 00:07:45.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019472 s, 53.9 MB/s 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:45.878 256+0 records in 00:07:45.878 256+0 records out 00:07:45.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227379 s, 46.1 MB/s 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.878 14:18:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:46.137 14:18:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:46.137 14:18:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:46.137 14:18:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:46.137 14:18:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.137 14:18:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.137 14:18:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:46.137 14:18:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:46.137 14:18:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.137 14:18:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.137 14:18:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:46.397 14:18:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:46.397 14:18:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:46.397 14:18:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:46.397 14:18:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.397 14:18:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.397 14:18:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:46.397 14:18:09 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:46.397 14:18:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.397 14:18:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:46.397 14:18:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.397 14:18:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:46.397 14:18:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:46.397 14:18:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:46.397 14:18:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:46.397 14:18:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:46.397 14:18:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:46.397 14:18:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:46.397 14:18:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:46.397 14:18:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:46.656 14:18:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:46.656 14:18:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:46.656 14:18:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:46.656 14:18:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:46.656 14:18:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:46.916 14:18:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:47.855 [2024-10-07 14:18:11.326756] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:47.855 [2024-10-07 14:18:11.499955] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.855 [2024-10-07 14:18:11.499958] 
reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.114 [2024-10-07 14:18:11.638155] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:48.114 [2024-10-07 14:18:11.638198] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:50.025 14:18:13 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2775237 /var/tmp/spdk-nbd.sock 00:07:50.025 14:18:13 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2775237 ']' 00:07:50.025 14:18:13 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:50.025 14:18:13 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.025 14:18:13 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:50.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:50.025 14:18:13 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.025 14:18:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:50.025 14:18:13 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:50.025 14:18:13 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:50.025 14:18:13 event.app_repeat -- event/event.sh@39 -- # killprocess 2775237 00:07:50.025 14:18:13 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2775237 ']' 00:07:50.025 14:18:13 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2775237 00:07:50.025 14:18:13 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:50.025 14:18:13 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:50.025 14:18:13 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2775237 00:07:50.025 14:18:13 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:50.025 14:18:13 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:50.025 14:18:13 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2775237' 00:07:50.025 killing process with pid 2775237 00:07:50.025 14:18:13 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2775237 00:07:50.025 14:18:13 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2775237 00:07:50.966 spdk_app_start is called in Round 0. 00:07:50.966 Shutdown signal received, stop current app iteration 00:07:50.966 Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 reinitialization... 00:07:50.966 spdk_app_start is called in Round 1. 00:07:50.966 Shutdown signal received, stop current app iteration 00:07:50.966 Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 reinitialization... 00:07:50.966 spdk_app_start is called in Round 2. 
00:07:50.966 Shutdown signal received, stop current app iteration 00:07:50.966 Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 reinitialization... 00:07:50.966 spdk_app_start is called in Round 3. 00:07:50.966 Shutdown signal received, stop current app iteration 00:07:50.966 14:18:14 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:50.966 14:18:14 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:50.966 00:07:50.966 real 0m17.444s 00:07:50.966 user 0m35.930s 00:07:50.966 sys 0m2.402s 00:07:50.966 14:18:14 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.966 14:18:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:50.966 ************************************ 00:07:50.966 END TEST app_repeat 00:07:50.966 ************************************ 00:07:50.966 14:18:14 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:50.966 14:18:14 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:50.966 14:18:14 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:50.966 14:18:14 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.966 14:18:14 event -- common/autotest_common.sh@10 -- # set +x 00:07:50.966 ************************************ 00:07:50.966 START TEST cpu_locks 00:07:50.966 ************************************ 00:07:50.966 14:18:14 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:50.966 * Looking for test storage... 
00:07:50.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:50.966 14:18:14 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:50.966 14:18:14 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:07:50.966 14:18:14 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:51.228 14:18:14 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:51.228 14:18:14 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.228 14:18:14 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.228 14:18:14 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.228 14:18:14 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.228 14:18:14 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.228 14:18:14 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.228 14:18:14 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.228 14:18:14 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.228 14:18:14 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.229 14:18:14 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.229 14:18:14 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.229 14:18:14 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:51.229 14:18:14 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:51.229 14:18:14 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.229 14:18:14 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.229 14:18:14 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:51.229 14:18:14 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:51.229 14:18:14 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.229 14:18:14 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:51.229 14:18:14 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.229 14:18:14 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:51.229 14:18:14 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:51.229 14:18:14 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.229 14:18:14 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:51.229 14:18:14 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.229 14:18:14 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.229 14:18:14 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.229 14:18:14 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:51.229 14:18:14 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.229 14:18:14 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:51.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.229 --rc genhtml_branch_coverage=1 00:07:51.229 --rc genhtml_function_coverage=1 00:07:51.229 --rc genhtml_legend=1 00:07:51.229 --rc geninfo_all_blocks=1 00:07:51.229 --rc geninfo_unexecuted_blocks=1 00:07:51.229 00:07:51.229 ' 00:07:51.229 14:18:14 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:51.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.229 --rc genhtml_branch_coverage=1 00:07:51.229 --rc genhtml_function_coverage=1 00:07:51.229 --rc genhtml_legend=1 00:07:51.229 --rc geninfo_all_blocks=1 00:07:51.229 --rc geninfo_unexecuted_blocks=1 
00:07:51.229 00:07:51.229 ' 00:07:51.229 14:18:14 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:51.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.229 --rc genhtml_branch_coverage=1 00:07:51.229 --rc genhtml_function_coverage=1 00:07:51.229 --rc genhtml_legend=1 00:07:51.229 --rc geninfo_all_blocks=1 00:07:51.229 --rc geninfo_unexecuted_blocks=1 00:07:51.229 00:07:51.229 ' 00:07:51.229 14:18:14 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:51.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.229 --rc genhtml_branch_coverage=1 00:07:51.229 --rc genhtml_function_coverage=1 00:07:51.229 --rc genhtml_legend=1 00:07:51.229 --rc geninfo_all_blocks=1 00:07:51.229 --rc geninfo_unexecuted_blocks=1 00:07:51.229 00:07:51.229 ' 00:07:51.229 14:18:14 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:51.229 14:18:14 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:51.229 14:18:14 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:51.229 14:18:14 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:51.229 14:18:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:51.229 14:18:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.229 14:18:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:51.229 ************************************ 00:07:51.229 START TEST default_locks 00:07:51.229 ************************************ 00:07:51.229 14:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:51.229 14:18:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2779624 00:07:51.229 14:18:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2779624 00:07:51.229 14:18:14 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:51.229 14:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2779624 ']' 00:07:51.229 14:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.229 14:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.229 14:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.229 14:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.229 14:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:51.229 [2024-10-07 14:18:14.842115] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:07:51.229 [2024-10-07 14:18:14.842227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2779624 ] 00:07:51.490 [2024-10-07 14:18:14.960536] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.490 [2024-10-07 14:18:15.138780] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.434 14:18:15 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:52.434 14:18:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:52.434 14:18:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2779624 00:07:52.434 14:18:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2779624 00:07:52.434 14:18:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:52.695 lslocks: write error 00:07:52.695 14:18:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2779624 00:07:52.695 14:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2779624 ']' 00:07:52.695 14:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2779624 00:07:52.695 14:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:52.695 14:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:52.695 14:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2779624 00:07:52.956 14:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:52.956 14:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:52.956 14:18:16 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2779624' 00:07:52.956 killing process with pid 2779624 00:07:52.956 14:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2779624 00:07:52.956 14:18:16 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 2779624 00:07:54.878 14:18:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2779624 00:07:54.878 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:54.878 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2779624 00:07:54.878 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:54.878 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.878 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:54.878 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:54.878 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2779624 00:07:54.878 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2779624 ']' 00:07:54.878 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.878 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:54.878 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:54.878 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:54.878 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:54.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2779624) - No such process 00:07:54.879 ERROR: process (pid: 2779624) is no longer running 00:07:54.879 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:54.879 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:54.879 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:54.879 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:54.879 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:54.879 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:54.879 14:18:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:54.879 14:18:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:54.879 14:18:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:54.879 14:18:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:54.879 00:07:54.879 real 0m3.438s 00:07:54.879 user 0m3.404s 00:07:54.879 sys 0m0.763s 00:07:54.879 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.879 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:54.879 ************************************ 00:07:54.879 END TEST default_locks 00:07:54.879 ************************************ 00:07:54.879 14:18:18 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:54.879 14:18:18 event.cpu_locks -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.879 14:18:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.879 14:18:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:54.879 ************************************ 00:07:54.879 START TEST default_locks_via_rpc 00:07:54.879 ************************************ 00:07:54.879 14:18:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:54.879 14:18:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2780334 00:07:54.879 14:18:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2780334 00:07:54.879 14:18:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:54.879 14:18:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2780334 ']' 00:07:54.879 14:18:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.879 14:18:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:54.879 14:18:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.879 14:18:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:54.879 14:18:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.879 [2024-10-07 14:18:18.348964] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:07:54.879 [2024-10-07 14:18:18.349082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2780334 ] 00:07:54.879 [2024-10-07 14:18:18.466862] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.139 [2024-10-07 14:18:18.643826] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.710 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.710 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:55.710 14:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:55.710 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.710 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.710 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.710 14:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:55.710 14:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:55.710 14:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:55.710 14:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:55.710 14:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:55.710 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.710 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.710 14:18:19 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.710 14:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2780334 00:07:55.710 14:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2780334 00:07:55.710 14:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:56.281 14:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2780334 00:07:56.281 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2780334 ']' 00:07:56.281 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2780334 00:07:56.281 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:56.281 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:56.281 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2780334 00:07:56.281 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:56.281 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:56.281 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2780334' 00:07:56.281 killing process with pid 2780334 00:07:56.281 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 2780334 00:07:56.281 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2780334 00:07:58.193 00:07:58.193 real 0m3.382s 00:07:58.193 user 0m3.382s 00:07:58.193 sys 0m0.719s 00:07:58.193 14:18:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.193 14:18:21 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.193 ************************************ 00:07:58.193 END TEST default_locks_via_rpc 00:07:58.193 ************************************ 00:07:58.193 14:18:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:58.193 14:18:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.193 14:18:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.193 14:18:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:58.193 ************************************ 00:07:58.193 START TEST non_locking_app_on_locked_coremask 00:07:58.193 ************************************ 00:07:58.193 14:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:58.193 14:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2781036 00:07:58.193 14:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2781036 /var/tmp/spdk.sock 00:07:58.193 14:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:58.193 14:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2781036 ']' 00:07:58.193 14:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.193 14:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:58.193 14:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:58.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.193 14:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:58.193 14:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:58.193 [2024-10-07 14:18:21.811106] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:58.193 [2024-10-07 14:18:21.811225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2781036 ] 00:07:58.454 [2024-10-07 14:18:21.930581] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.454 [2024-10-07 14:18:22.108880] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.394 14:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:59.394 14:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:59.394 14:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2781369 00:07:59.394 14:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:59.394 14:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2781369 /var/tmp/spdk2.sock 00:07:59.394 14:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2781369 ']' 00:07:59.394 14:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:07:59.394 14:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:59.394 14:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:59.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:59.394 14:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:59.394 14:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:59.394 [2024-10-07 14:18:22.827286] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:59.394 [2024-10-07 14:18:22.827400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2781369 ] 00:07:59.394 [2024-10-07 14:18:22.990722] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:59.394 [2024-10-07 14:18:22.990771] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.654 [2024-10-07 14:18:23.349354] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.566 14:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.566 14:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:01.566 14:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2781036 00:08:01.827 14:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2781036 00:08:01.827 14:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:02.419 lslocks: write error 00:08:02.419 14:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2781036 00:08:02.419 14:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2781036 ']' 00:08:02.419 14:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2781036 00:08:02.419 14:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:02.419 14:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:02.419 14:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2781036 00:08:02.419 14:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:02.419 14:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:02.419 14:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2781036' 00:08:02.419 killing process with pid 2781036 00:08:02.419 14:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2781036 00:08:02.419 14:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2781036 00:08:05.928 14:18:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2781369 00:08:05.928 14:18:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2781369 ']' 00:08:05.928 14:18:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2781369 00:08:05.928 14:18:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:05.928 14:18:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:05.928 14:18:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2781369 00:08:05.928 14:18:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:05.928 14:18:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:05.928 14:18:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2781369' 00:08:05.928 killing process with pid 2781369 00:08:05.928 14:18:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2781369 00:08:05.928 14:18:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2781369 00:08:07.844 00:08:07.844 real 0m9.415s 00:08:07.844 user 0m9.659s 00:08:07.844 sys 0m1.227s 00:08:07.844 14:18:31 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.844 14:18:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:07.844 ************************************ 00:08:07.844 END TEST non_locking_app_on_locked_coremask 00:08:07.844 ************************************ 00:08:07.844 14:18:31 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:07.844 14:18:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:07.844 14:18:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.844 14:18:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:07.844 ************************************ 00:08:07.844 START TEST locking_app_on_unlocked_coremask 00:08:07.844 ************************************ 00:08:07.844 14:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:08:07.844 14:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2783092 00:08:07.844 14:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2783092 /var/tmp/spdk.sock 00:08:07.844 14:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:07.844 14:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2783092 ']' 00:08:07.844 14:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.844 14:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:07.844 14:18:31 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.844 14:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:07.844 14:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:07.844 [2024-10-07 14:18:31.301795] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:07.844 [2024-10-07 14:18:31.301923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2783092 ] 00:08:07.844 [2024-10-07 14:18:31.434299] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:07.844 [2024-10-07 14:18:31.434350] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.105 [2024-10-07 14:18:31.614781] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.676 14:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:08.676 14:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:08.676 14:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2783123 00:08:08.676 14:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2783123 /var/tmp/spdk2.sock 00:08:08.676 14:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2783123 ']' 00:08:08.676 14:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:08.676 14:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:08.676 14:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.676 14:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:08.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:08.676 14:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.676 14:18:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:08.676 [2024-10-07 14:18:32.354279] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:08:08.676 [2024-10-07 14:18:32.354396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2783123 ] 00:08:08.937 [2024-10-07 14:18:32.521234] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.198 [2024-10-07 14:18:32.876385] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.112 14:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:11.112 14:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:11.112 14:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2783123 00:08:11.112 14:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:11.112 14:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2783123 00:08:11.682 lslocks: write error 00:08:11.682 14:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2783092 00:08:11.683 14:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2783092 ']' 00:08:11.683 14:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2783092 00:08:11.683 14:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:11.683 14:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:11.683 14:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2783092 00:08:11.944 14:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:11.944 14:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:11.944 14:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2783092' 00:08:11.944 killing process with pid 2783092 00:08:11.944 14:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2783092 00:08:11.944 14:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2783092 00:08:15.244 14:18:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2783123 00:08:15.244 14:18:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2783123 ']' 00:08:15.244 14:18:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2783123 00:08:15.244 14:18:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:15.244 14:18:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:15.244 14:18:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2783123 00:08:15.244 14:18:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:15.244 14:18:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:15.244 14:18:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2783123' 00:08:15.244 killing process with pid 2783123 00:08:15.244 14:18:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2783123 00:08:15.244 14:18:38 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2783123 00:08:17.158 00:08:17.158 real 0m9.439s 00:08:17.158 user 0m9.703s 00:08:17.158 sys 0m1.246s 00:08:17.158 14:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.158 14:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:17.158 ************************************ 00:08:17.158 END TEST locking_app_on_unlocked_coremask 00:08:17.158 ************************************ 00:08:17.158 14:18:40 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:17.158 14:18:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:17.158 14:18:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.158 14:18:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:17.158 ************************************ 00:08:17.158 START TEST locking_app_on_locked_coremask 00:08:17.158 ************************************ 00:08:17.158 14:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:08:17.158 14:18:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2784849 00:08:17.158 14:18:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2784849 /var/tmp/spdk.sock 00:08:17.158 14:18:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:17.158 14:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2784849 ']' 00:08:17.158 14:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:08:17.158 14:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:17.158 14:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.158 14:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:17.158 14:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:17.158 [2024-10-07 14:18:40.816245] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:17.158 [2024-10-07 14:18:40.816367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2784849 ] 00:08:17.420 [2024-10-07 14:18:40.944162] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.681 [2024-10-07 14:18:41.130360] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.252 14:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:18.252 14:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:18.252 14:18:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2785156 00:08:18.252 14:18:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2785156 /var/tmp/spdk2.sock 00:08:18.252 14:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:18.252 14:18:41 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:18.252 14:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2785156 /var/tmp/spdk2.sock 00:08:18.252 14:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:18.252 14:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.253 14:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:18.253 14:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.253 14:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2785156 /var/tmp/spdk2.sock 00:08:18.253 14:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2785156 ']' 00:08:18.253 14:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:18.253 14:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:18.253 14:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:18.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:18.253 14:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:18.253 14:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:18.253 [2024-10-07 14:18:41.857924] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:18.253 [2024-10-07 14:18:41.858044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2785156 ] 00:08:18.513 [2024-10-07 14:18:42.022615] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2784849 has claimed it. 00:08:18.513 [2024-10-07 14:18:42.022672] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:18.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2785156) - No such process 00:08:18.774 ERROR: process (pid: 2785156) is no longer running 00:08:18.774 14:18:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:18.774 14:18:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:18.774 14:18:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:18.774 14:18:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.774 14:18:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:18.774 14:18:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:18.774 14:18:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2784849 00:08:18.774 14:18:42 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2784849 00:08:18.774 14:18:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:19.347 lslocks: write error 00:08:19.347 14:18:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2784849 00:08:19.347 14:18:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2784849 ']' 00:08:19.347 14:18:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2784849 00:08:19.347 14:18:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:19.347 14:18:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:19.347 14:18:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2784849 00:08:19.347 14:18:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:19.347 14:18:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:19.347 14:18:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2784849' 00:08:19.347 killing process with pid 2784849 00:08:19.347 14:18:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2784849 00:08:19.347 14:18:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2784849 00:08:21.262 00:08:21.262 real 0m3.993s 00:08:21.262 user 0m4.128s 00:08:21.262 sys 0m0.856s 00:08:21.262 14:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.262 14:18:44 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:08:21.262 ************************************ 00:08:21.262 END TEST locking_app_on_locked_coremask 00:08:21.262 ************************************ 00:08:21.262 14:18:44 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:21.262 14:18:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.262 14:18:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.262 14:18:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:21.262 ************************************ 00:08:21.262 START TEST locking_overlapped_coremask 00:08:21.262 ************************************ 00:08:21.262 14:18:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:08:21.262 14:18:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2785855 00:08:21.262 14:18:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2785855 /var/tmp/spdk.sock 00:08:21.262 14:18:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:21.262 14:18:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2785855 ']' 00:08:21.262 14:18:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.262 14:18:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:21.262 14:18:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:21.262 14:18:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:21.262 14:18:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:21.262 [2024-10-07 14:18:44.880313] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:21.262 [2024-10-07 14:18:44.880426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2785855 ] 00:08:21.523 [2024-10-07 14:18:45.006226] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:21.523 [2024-10-07 14:18:45.190090] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.523 [2024-10-07 14:18:45.190367] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.523 [2024-10-07 14:18:45.190367] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.466 14:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:22.466 14:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:22.466 14:18:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2785904 00:08:22.466 14:18:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2785904 /var/tmp/spdk2.sock 00:08:22.466 14:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:22.466 14:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2785904 /var/tmp/spdk2.sock 00:08:22.466 14:18:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:22.466 14:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:22.466 14:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.466 14:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:22.466 14:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.466 14:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2785904 /var/tmp/spdk2.sock 00:08:22.466 14:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2785904 ']' 00:08:22.466 14:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:22.466 14:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:22.466 14:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:22.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:22.466 14:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:22.466 14:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:22.466 [2024-10-07 14:18:45.922577] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:08:22.466 [2024-10-07 14:18:45.922687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2785904 ] 00:08:22.466 [2024-10-07 14:18:46.065032] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2785855 has claimed it. 00:08:22.466 [2024-10-07 14:18:46.065080] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:23.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2785904) - No such process 00:08:23.038 ERROR: process (pid: 2785904) is no longer running 00:08:23.038 14:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.038 14:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:23.038 14:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:23.038 14:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:23.038 14:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:23.038 14:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:23.038 14:18:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:23.038 14:18:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:23.038 14:18:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:23.038 14:18:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:23.038 14:18:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2785855 00:08:23.038 14:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2785855 ']' 00:08:23.038 14:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2785855 00:08:23.038 14:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:08:23.038 14:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:23.038 14:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2785855 00:08:23.038 14:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:23.038 14:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:23.038 14:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2785855' 00:08:23.038 killing process with pid 2785855 00:08:23.039 14:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 2785855 00:08:23.039 14:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2785855 00:08:24.954 00:08:24.954 real 0m3.495s 00:08:24.954 user 0m9.160s 00:08:24.954 sys 0m0.587s 00:08:24.954 14:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.954 14:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:24.954 
************************************ 00:08:24.954 END TEST locking_overlapped_coremask 00:08:24.954 ************************************ 00:08:24.954 14:18:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:24.954 14:18:48 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.954 14:18:48 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.954 14:18:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:24.954 ************************************ 00:08:24.954 START TEST locking_overlapped_coremask_via_rpc 00:08:24.954 ************************************ 00:08:24.954 14:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:08:24.954 14:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2786570 00:08:24.954 14:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2786570 /var/tmp/spdk.sock 00:08:24.954 14:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:24.954 14:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2786570 ']' 00:08:24.954 14:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.954 14:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:24.954 14:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:24.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.954 14:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:24.954 14:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.954 [2024-10-07 14:18:48.451557] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:24.954 [2024-10-07 14:18:48.451668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2786570 ] 00:08:24.954 [2024-10-07 14:18:48.571821] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:24.954 [2024-10-07 14:18:48.571864] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:25.215 [2024-10-07 14:18:48.756924] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.215 [2024-10-07 14:18:48.757012] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.215 [2024-10-07 14:18:48.757014] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.787 14:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:25.787 14:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:25.787 14:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2786728 00:08:25.787 14:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2786728 /var/tmp/spdk2.sock 00:08:25.787 14:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2786728 ']' 00:08:25.787 14:18:49 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:25.787 14:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:25.787 14:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.787 14:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:25.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:25.787 14:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.787 14:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.787 [2024-10-07 14:18:49.490326] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:25.787 [2024-10-07 14:18:49.490428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2786728 ] 00:08:26.049 [2024-10-07 14:18:49.633710] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:26.049 [2024-10-07 14:18:49.633754] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:26.310 [2024-10-07 14:18:49.908769] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.310 [2024-10-07 14:18:49.908867] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.310 [2024-10-07 14:18:49.908896] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.253 14:18:50 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.253 [2024-10-07 14:18:50.864112] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2786570 has claimed it. 00:08:27.253 request: 00:08:27.253 { 00:08:27.253 "method": "framework_enable_cpumask_locks", 00:08:27.253 "req_id": 1 00:08:27.253 } 00:08:27.253 Got JSON-RPC error response 00:08:27.253 response: 00:08:27.253 { 00:08:27.253 "code": -32603, 00:08:27.253 "message": "Failed to claim CPU core: 2" 00:08:27.253 } 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2786570 /var/tmp/spdk.sock 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 
-- # '[' -z 2786570 ']' 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.253 14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.514 14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:27.514 14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:27.514 14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2786728 /var/tmp/spdk2.sock 00:08:27.514 14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2786728 ']' 00:08:27.514 14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:27.514 14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.514 14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:27.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:27.514 14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.514 14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.775 14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:27.775 14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:27.775 14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:27.775 14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:27.775 14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:27.775 14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:27.775 00:08:27.775 real 0m2.878s 00:08:27.775 user 0m0.874s 00:08:27.775 sys 0m0.157s 00:08:27.775 14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.775 14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.775 ************************************ 00:08:27.775 END TEST locking_overlapped_coremask_via_rpc 00:08:27.775 ************************************ 00:08:27.775 14:18:51 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:27.775 14:18:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2786570 ]] 00:08:27.775 14:18:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
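The `check_remaining_locks` trace above compares the glob `/var/tmp/spdk_cpu_lock_*` against the brace expansion `/var/tmp/spdk_cpu_lock_{000..002}`. A minimal Python sketch of what that brace expansion produces (illustrative only, not the test script itself):

```python
# Python equivalent of the shell brace expansion spdk_cpu_lock_{000..002}:
# three zero-padded lock-file names, one per claimed CPU core.
expected = [f"/var/tmp/spdk_cpu_lock_{i:03d}" for i in range(0, 3)]

# The shell test passes when the sorted glob of actual lock files
# matches this list exactly.
print(expected)
```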
killprocess 2786570 00:08:27.775 14:18:51 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2786570 ']' 00:08:27.775 14:18:51 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2786570 00:08:27.775 14:18:51 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:27.775 14:18:51 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:27.775 14:18:51 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2786570 00:08:27.775 14:18:51 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:27.775 14:18:51 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:27.775 14:18:51 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2786570' 00:08:27.775 killing process with pid 2786570 00:08:27.775 14:18:51 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2786570 00:08:27.775 14:18:51 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2786570 00:08:29.689 14:18:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2786728 ]] 00:08:29.689 14:18:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2786728 00:08:29.689 14:18:53 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2786728 ']' 00:08:29.689 14:18:53 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2786728 00:08:29.689 14:18:53 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:29.689 14:18:53 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:29.689 14:18:53 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2786728 00:08:29.689 14:18:53 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:29.689 14:18:53 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:29.689 14:18:53 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
2786728' 00:08:29.689 killing process with pid 2786728 00:08:29.689 14:18:53 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2786728 00:08:29.689 14:18:53 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2786728 00:08:31.075 14:18:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:31.075 14:18:54 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:31.075 14:18:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2786570 ]] 00:08:31.075 14:18:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2786570 00:08:31.075 14:18:54 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2786570 ']' 00:08:31.075 14:18:54 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2786570 00:08:31.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2786570) - No such process 00:08:31.075 14:18:54 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2786570 is not found' 00:08:31.075 Process with pid 2786570 is not found 00:08:31.075 14:18:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2786728 ]] 00:08:31.075 14:18:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2786728 00:08:31.075 14:18:54 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2786728 ']' 00:08:31.075 14:18:54 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2786728 00:08:31.075 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2786728) - No such process 00:08:31.075 14:18:54 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2786728 is not found' 00:08:31.075 Process with pid 2786728 is not found 00:08:31.075 14:18:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:31.075 00:08:31.075 real 0m39.854s 00:08:31.075 user 1m2.852s 00:08:31.075 sys 0m6.731s 00:08:31.075 14:18:54 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.075 
14:18:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:31.075 ************************************ 00:08:31.075 END TEST cpu_locks 00:08:31.075 ************************************ 00:08:31.075 00:08:31.075 real 1m9.467s 00:08:31.075 user 2m1.292s 00:08:31.075 sys 0m10.502s 00:08:31.075 14:18:54 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.075 14:18:54 event -- common/autotest_common.sh@10 -- # set +x 00:08:31.075 ************************************ 00:08:31.075 END TEST event 00:08:31.075 ************************************ 00:08:31.075 14:18:54 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:31.075 14:18:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:31.075 14:18:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.075 14:18:54 -- common/autotest_common.sh@10 -- # set +x 00:08:31.075 ************************************ 00:08:31.075 START TEST thread 00:08:31.075 ************************************ 00:08:31.076 14:18:54 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:31.076 * Looking for test storage... 
00:08:31.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:08:31.076 14:18:54 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:31.076 14:18:54 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:08:31.076 14:18:54 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:31.076 14:18:54 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:31.076 14:18:54 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.076 14:18:54 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.076 14:18:54 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.076 14:18:54 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.076 14:18:54 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.076 14:18:54 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.076 14:18:54 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.076 14:18:54 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.076 14:18:54 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.076 14:18:54 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.076 14:18:54 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.076 14:18:54 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:31.076 14:18:54 thread -- scripts/common.sh@345 -- # : 1 00:08:31.076 14:18:54 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.076 14:18:54 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:31.076 14:18:54 thread -- scripts/common.sh@365 -- # decimal 1 00:08:31.076 14:18:54 thread -- scripts/common.sh@353 -- # local d=1 00:08:31.076 14:18:54 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.076 14:18:54 thread -- scripts/common.sh@355 -- # echo 1 00:08:31.076 14:18:54 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.076 14:18:54 thread -- scripts/common.sh@366 -- # decimal 2 00:08:31.076 14:18:54 thread -- scripts/common.sh@353 -- # local d=2 00:08:31.076 14:18:54 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.076 14:18:54 thread -- scripts/common.sh@355 -- # echo 2 00:08:31.076 14:18:54 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.076 14:18:54 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.076 14:18:54 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.076 14:18:54 thread -- scripts/common.sh@368 -- # return 0 00:08:31.076 14:18:54 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.076 14:18:54 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:31.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.076 --rc genhtml_branch_coverage=1 00:08:31.076 --rc genhtml_function_coverage=1 00:08:31.076 --rc genhtml_legend=1 00:08:31.076 --rc geninfo_all_blocks=1 00:08:31.076 --rc geninfo_unexecuted_blocks=1 00:08:31.076 00:08:31.076 ' 00:08:31.076 14:18:54 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:31.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.076 --rc genhtml_branch_coverage=1 00:08:31.076 --rc genhtml_function_coverage=1 00:08:31.076 --rc genhtml_legend=1 00:08:31.076 --rc geninfo_all_blocks=1 00:08:31.076 --rc geninfo_unexecuted_blocks=1 00:08:31.076 00:08:31.076 ' 00:08:31.076 14:18:54 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:31.076 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.076 --rc genhtml_branch_coverage=1 00:08:31.076 --rc genhtml_function_coverage=1 00:08:31.076 --rc genhtml_legend=1 00:08:31.076 --rc geninfo_all_blocks=1 00:08:31.076 --rc geninfo_unexecuted_blocks=1 00:08:31.076 00:08:31.076 ' 00:08:31.076 14:18:54 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:31.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.076 --rc genhtml_branch_coverage=1 00:08:31.076 --rc genhtml_function_coverage=1 00:08:31.076 --rc genhtml_legend=1 00:08:31.076 --rc geninfo_all_blocks=1 00:08:31.076 --rc geninfo_unexecuted_blocks=1 00:08:31.076 00:08:31.076 ' 00:08:31.076 14:18:54 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:31.076 14:18:54 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:31.076 14:18:54 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.076 14:18:54 thread -- common/autotest_common.sh@10 -- # set +x 00:08:31.076 ************************************ 00:08:31.076 START TEST thread_poller_perf 00:08:31.076 ************************************ 00:08:31.076 14:18:54 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:31.076 [2024-10-07 14:18:54.779983] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
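The xtrace above steps through `scripts/common.sh`'s `cmp_versions` helper evaluating `lt 1.15 2`: each version is split on `.`, `-`, and `:` (the `IFS=.-:` in the trace) into an array, then compared component by component, with missing trailing components treated as zero. A hedged Python sketch of that comparison (illustrative, not SPDK's actual shell code):

```python
import re

def version_lt(v1: str, v2: str) -> bool:
    """Componentwise version compare, mirroring cmp_versions' '<' case.

    Split on '.', '-', ':' as the trace's IFS=.-: does; a shorter
    version is padded with zeros on the right.
    """
    a = [int(x) for x in re.split(r"[.\-:]", v1)]
    b = [int(x) for x in re.split(r"[.\-:]", v2)]
    for i in range(max(len(a), len(b))):
        x = a[i] if i < len(a) else 0
        y = b[i] if i < len(b) else 0
        if x > y:
            return False
        if x < y:
            return True
    return False  # versions are equal

# The check from the trace: is lcov 1.15 older than 2?
print(version_lt("1.15", "2"))  # -> True
```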
00:08:31.076 [2024-10-07 14:18:54.780101] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2787940 ] 00:08:31.337 [2024-10-07 14:18:54.908270] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.600 [2024-10-07 14:18:55.087111] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.600 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:32.985 [2024-10-07T12:18:56.694Z] ====================================== 00:08:32.985 [2024-10-07T12:18:56.694Z] busy:2408141470 (cyc) 00:08:32.985 [2024-10-07T12:18:56.694Z] total_run_count: 283000 00:08:32.985 [2024-10-07T12:18:56.694Z] tsc_hz: 2400000000 (cyc) 00:08:32.985 [2024-10-07T12:18:56.694Z] ====================================== 00:08:32.985 [2024-10-07T12:18:56.694Z] poller_cost: 8509 (cyc), 3545 (nsec) 00:08:32.985 00:08:32.985 real 0m1.644s 00:08:32.985 user 0m1.476s 00:08:32.985 sys 0m0.161s 00:08:32.985 14:18:56 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.985 14:18:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:32.985 ************************************ 00:08:32.985 END TEST thread_poller_perf 00:08:32.985 ************************************ 00:08:32.985 14:18:56 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:32.985 14:18:56 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:32.985 14:18:56 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.985 14:18:56 thread -- common/autotest_common.sh@10 -- # set +x 00:08:32.985 ************************************ 00:08:32.985 START TEST thread_poller_perf 00:08:32.985 
************************************ 00:08:32.985 14:18:56 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:32.985 [2024-10-07 14:18:56.497227] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:32.985 [2024-10-07 14:18:56.497337] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2788311 ] 00:08:32.985 [2024-10-07 14:18:56.623823] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.245 [2024-10-07 14:18:56.803492] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.245 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:34.629 [2024-10-07T12:18:58.338Z] ====================================== 00:08:34.629 [2024-10-07T12:18:58.338Z] busy:2403426190 (cyc) 00:08:34.629 [2024-10-07T12:18:58.338Z] total_run_count: 3656000 00:08:34.629 [2024-10-07T12:18:58.338Z] tsc_hz: 2400000000 (cyc) 00:08:34.629 [2024-10-07T12:18:58.338Z] ====================================== 00:08:34.629 [2024-10-07T12:18:58.338Z] poller_cost: 657 (cyc), 273 (nsec) 00:08:34.629 00:08:34.629 real 0m1.634s 00:08:34.629 user 0m1.479s 00:08:34.629 sys 0m0.150s 00:08:34.629 14:18:58 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.629 14:18:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:34.629 ************************************ 00:08:34.629 END TEST thread_poller_perf 00:08:34.629 ************************************ 00:08:34.629 14:18:58 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:34.629 00:08:34.629 real 0m3.638s 00:08:34.629 user 0m3.135s 00:08:34.629 sys 0m0.513s 00:08:34.629 14:18:58 thread -- 
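The two poller_perf summaries above derive `poller_cost` from the busy cycle count, the run count, and the TSC frequency. A small sketch reproducing that arithmetic (assumption: cycles are divided by run count with truncation, then converted to nanoseconds via `tsc_hz`):

```python
def poller_cost(busy_cycles: int, total_run_count: int, tsc_hz: int):
    """Per-iteration poller cost as (cycles, nanoseconds), truncated."""
    cyc = busy_cycles // total_run_count
    # tsc_hz / 1e9 is cycles per nanosecond (2.4 for the 2.4 GHz TSC above).
    nsec = int(cyc / (tsc_hz / 1e9))
    return cyc, nsec

# Figures from the two runs above:
print(poller_cost(2408141470, 283000, 2400000000))   # -> (8509, 3545)
print(poller_cost(2403426190, 3656000, 2400000000))  # -> (657, 273)
```

Both results match the `poller_cost` lines reported in the summaries.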
common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.629 14:18:58 thread -- common/autotest_common.sh@10 -- # set +x 00:08:34.629 ************************************ 00:08:34.629 END TEST thread 00:08:34.629 ************************************ 00:08:34.629 14:18:58 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:34.629 14:18:58 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:34.629 14:18:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.629 14:18:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.629 14:18:58 -- common/autotest_common.sh@10 -- # set +x 00:08:34.629 ************************************ 00:08:34.629 START TEST app_cmdline 00:08:34.629 ************************************ 00:08:34.629 14:18:58 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:34.629 * Looking for test storage... 00:08:34.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:34.629 14:18:58 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:34.629 14:18:58 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:08:34.629 14:18:58 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:34.890 14:18:58 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.890 14:18:58 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:34.890 14:18:58 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.890 14:18:58 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:34.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.890 --rc genhtml_branch_coverage=1 
00:08:34.890 --rc genhtml_function_coverage=1 00:08:34.890 --rc genhtml_legend=1 00:08:34.890 --rc geninfo_all_blocks=1 00:08:34.890 --rc geninfo_unexecuted_blocks=1 00:08:34.890 00:08:34.890 ' 00:08:34.890 14:18:58 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:34.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.890 --rc genhtml_branch_coverage=1 00:08:34.890 --rc genhtml_function_coverage=1 00:08:34.890 --rc genhtml_legend=1 00:08:34.890 --rc geninfo_all_blocks=1 00:08:34.890 --rc geninfo_unexecuted_blocks=1 00:08:34.890 00:08:34.890 ' 00:08:34.890 14:18:58 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:34.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.890 --rc genhtml_branch_coverage=1 00:08:34.890 --rc genhtml_function_coverage=1 00:08:34.890 --rc genhtml_legend=1 00:08:34.890 --rc geninfo_all_blocks=1 00:08:34.890 --rc geninfo_unexecuted_blocks=1 00:08:34.890 00:08:34.890 ' 00:08:34.890 14:18:58 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:34.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.890 --rc genhtml_branch_coverage=1 00:08:34.890 --rc genhtml_function_coverage=1 00:08:34.890 --rc genhtml_legend=1 00:08:34.890 --rc geninfo_all_blocks=1 00:08:34.890 --rc geninfo_unexecuted_blocks=1 00:08:34.890 00:08:34.890 ' 00:08:34.890 14:18:58 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:34.890 14:18:58 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2788792 00:08:34.890 14:18:58 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2788792 00:08:34.890 14:18:58 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2788792 ']' 00:08:34.890 14:18:58 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:34.890 14:18:58 app_cmdline -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:34.890 14:18:58 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:34.890 14:18:58 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.890 14:18:58 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:34.890 14:18:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:34.890 [2024-10-07 14:18:58.514047] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:34.890 [2024-10-07 14:18:58.514183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2788792 ] 00:08:35.150 [2024-10-07 14:18:58.646181] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.150 [2024-10-07 14:18:58.825948] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.093 14:18:59 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:36.093 14:18:59 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:08:36.093 14:18:59 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:36.093 { 00:08:36.093 "version": "SPDK v25.01-pre git sha1 3950cd1bb", 00:08:36.093 "fields": { 00:08:36.093 "major": 25, 00:08:36.093 "minor": 1, 00:08:36.093 "patch": 0, 00:08:36.093 "suffix": "-pre", 00:08:36.093 "commit": "3950cd1bb" 00:08:36.093 } 00:08:36.093 } 00:08:36.093 14:18:59 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:36.093 14:18:59 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:36.093 14:18:59 app_cmdline -- app/cmdline.sh@24 -- 
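The `spdk_get_version` reply captured above returns the version both as a display string and as structured fields. A small illustrative consistency check in plain Python (not SPDK's `rpc.py`):

```python
import json

# The reply body as it appears in the log above.
reply = json.loads('''{
  "version": "SPDK v25.01-pre git sha1 3950cd1bb",
  "fields": {
    "major": 25, "minor": 1, "patch": 0,
    "suffix": "-pre", "commit": "3950cd1bb"
  }
}''')

fields = reply["fields"]
# The commit hash and suffix from the fields also appear in the string form.
assert fields["commit"] in reply["version"]
assert fields["suffix"] in reply["version"]
print(fields["major"], fields["minor"], fields["patch"])  # -> 25 1 0
```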
# expected_methods+=("spdk_get_version") 00:08:36.093 14:18:59 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:36.093 14:18:59 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:36.093 14:18:59 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:36.093 14:18:59 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:36.093 14:18:59 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.093 14:18:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:36.093 14:18:59 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.093 14:18:59 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:36.093 14:18:59 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:36.093 14:18:59 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:36.093 14:18:59 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:36.093 14:18:59 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:36.093 14:18:59 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:36.093 14:18:59 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.093 14:18:59 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:36.093 14:18:59 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.093 14:18:59 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:36.093 14:18:59 app_cmdline -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:08:36.093 14:18:59 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:36.093 14:18:59 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:36.093 14:18:59 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:36.355 request: 00:08:36.355 { 00:08:36.355 "method": "env_dpdk_get_mem_stats", 00:08:36.355 "req_id": 1 00:08:36.355 } 00:08:36.355 Got JSON-RPC error response 00:08:36.355 response: 00:08:36.355 { 00:08:36.356 "code": -32601, 00:08:36.356 "message": "Method not found" 00:08:36.356 } 00:08:36.356 14:18:59 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:36.356 14:18:59 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:36.356 14:18:59 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:36.356 14:18:59 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:36.356 14:18:59 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2788792 00:08:36.356 14:18:59 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2788792 ']' 00:08:36.356 14:18:59 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2788792 00:08:36.356 14:18:59 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:08:36.356 14:18:59 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:36.356 14:18:59 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2788792 00:08:36.356 14:18:59 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:36.356 14:18:59 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:36.356 14:18:59 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2788792' 00:08:36.356 killing process with pid 2788792 00:08:36.356 
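The `env_dpdk_get_mem_stats` call above fails with JSON-RPC error `-32601` because `spdk_tgt` was started with `--rpcs-allowed spdk_get_version,rpc_get_methods`; earlier in this run, the failed core claim returned `-32603`. Both are standard JSON-RPC 2.0 error codes. A minimal sketch of decoding that error envelope (illustrative, not SPDK code):

```python
import json

# The error body as it appears in the log above.
raw = '{"code": -32601, "message": "Method not found"}'
err = json.loads(raw)

# Standard JSON-RPC 2.0 error codes seen in this run:
JSONRPC_ERRORS = {
    -32601: "method not found",   # this call: RPC filtered by --rpcs-allowed
    -32603: "internal error",     # earlier: "Failed to claim CPU core: 2"
}

print(JSONRPC_ERRORS.get(err["code"], "unknown"))  # -> method not found
```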
14:18:59 app_cmdline -- common/autotest_common.sh@969 -- # kill 2788792 00:08:36.356 14:18:59 app_cmdline -- common/autotest_common.sh@974 -- # wait 2788792 00:08:38.270 00:08:38.270 real 0m3.418s 00:08:38.270 user 0m3.592s 00:08:38.270 sys 0m0.607s 00:08:38.270 14:19:01 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.270 14:19:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:38.270 ************************************ 00:08:38.270 END TEST app_cmdline 00:08:38.270 ************************************ 00:08:38.270 14:19:01 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:38.270 14:19:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:38.270 14:19:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.270 14:19:01 -- common/autotest_common.sh@10 -- # set +x 00:08:38.270 ************************************ 00:08:38.270 START TEST version 00:08:38.270 ************************************ 00:08:38.270 14:19:01 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:38.270 * Looking for test storage... 
00:08:38.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:38.270 14:19:01 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:38.270 14:19:01 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:38.270 14:19:01 version -- common/autotest_common.sh@1681 -- # lcov --version 00:08:38.270 14:19:01 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:38.270 14:19:01 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.270 14:19:01 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.270 14:19:01 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.270 14:19:01 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.270 14:19:01 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.270 14:19:01 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.270 14:19:01 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.270 14:19:01 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.270 14:19:01 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.270 14:19:01 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.270 14:19:01 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.270 14:19:01 version -- scripts/common.sh@344 -- # case "$op" in 00:08:38.270 14:19:01 version -- scripts/common.sh@345 -- # : 1 00:08:38.270 14:19:01 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.270 14:19:01 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.270 14:19:01 version -- scripts/common.sh@365 -- # decimal 1 00:08:38.270 14:19:01 version -- scripts/common.sh@353 -- # local d=1 00:08:38.270 14:19:01 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.270 14:19:01 version -- scripts/common.sh@355 -- # echo 1 00:08:38.270 14:19:01 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.270 14:19:01 version -- scripts/common.sh@366 -- # decimal 2 00:08:38.270 14:19:01 version -- scripts/common.sh@353 -- # local d=2 00:08:38.270 14:19:01 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.270 14:19:01 version -- scripts/common.sh@355 -- # echo 2 00:08:38.270 14:19:01 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.270 14:19:01 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.270 14:19:01 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.270 14:19:01 version -- scripts/common.sh@368 -- # return 0 00:08:38.270 14:19:01 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.270 14:19:01 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:38.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.270 --rc genhtml_branch_coverage=1 00:08:38.270 --rc genhtml_function_coverage=1 00:08:38.270 --rc genhtml_legend=1 00:08:38.270 --rc geninfo_all_blocks=1 00:08:38.270 --rc geninfo_unexecuted_blocks=1 00:08:38.270 00:08:38.270 ' 00:08:38.270 14:19:01 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:38.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.270 --rc genhtml_branch_coverage=1 00:08:38.270 --rc genhtml_function_coverage=1 00:08:38.270 --rc genhtml_legend=1 00:08:38.270 --rc geninfo_all_blocks=1 00:08:38.270 --rc geninfo_unexecuted_blocks=1 00:08:38.270 00:08:38.270 ' 00:08:38.270 14:19:01 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:38.270 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.270 --rc genhtml_branch_coverage=1 00:08:38.270 --rc genhtml_function_coverage=1 00:08:38.270 --rc genhtml_legend=1 00:08:38.270 --rc geninfo_all_blocks=1 00:08:38.270 --rc geninfo_unexecuted_blocks=1 00:08:38.270 00:08:38.270 ' 00:08:38.270 14:19:01 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:38.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.270 --rc genhtml_branch_coverage=1 00:08:38.270 --rc genhtml_function_coverage=1 00:08:38.270 --rc genhtml_legend=1 00:08:38.270 --rc geninfo_all_blocks=1 00:08:38.270 --rc geninfo_unexecuted_blocks=1 00:08:38.270 00:08:38.270 ' 00:08:38.270 14:19:01 version -- app/version.sh@17 -- # get_header_version major 00:08:38.270 14:19:01 version -- app/version.sh@14 -- # tr -d '"' 00:08:38.270 14:19:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:38.270 14:19:01 version -- app/version.sh@14 -- # cut -f2 00:08:38.270 14:19:01 version -- app/version.sh@17 -- # major=25 00:08:38.270 14:19:01 version -- app/version.sh@18 -- # get_header_version minor 00:08:38.270 14:19:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:38.270 14:19:01 version -- app/version.sh@14 -- # cut -f2 00:08:38.270 14:19:01 version -- app/version.sh@14 -- # tr -d '"' 00:08:38.270 14:19:01 version -- app/version.sh@18 -- # minor=1 00:08:38.270 14:19:01 version -- app/version.sh@19 -- # get_header_version patch 00:08:38.270 14:19:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:38.270 14:19:01 version -- app/version.sh@14 -- # cut -f2 00:08:38.270 14:19:01 version -- app/version.sh@14 -- # tr -d '"' 00:08:38.270 
14:19:01 version -- app/version.sh@19 -- # patch=0 00:08:38.270 14:19:01 version -- app/version.sh@20 -- # get_header_version suffix 00:08:38.270 14:19:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:38.270 14:19:01 version -- app/version.sh@14 -- # cut -f2 00:08:38.270 14:19:01 version -- app/version.sh@14 -- # tr -d '"' 00:08:38.270 14:19:01 version -- app/version.sh@20 -- # suffix=-pre 00:08:38.270 14:19:01 version -- app/version.sh@22 -- # version=25.1 00:08:38.270 14:19:01 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:38.270 14:19:01 version -- app/version.sh@28 -- # version=25.1rc0 00:08:38.270 14:19:01 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:38.270 14:19:01 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:38.531 14:19:01 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:38.531 14:19:01 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:38.531 00:08:38.531 real 0m0.281s 00:08:38.531 user 0m0.166s 00:08:38.531 sys 0m0.159s 00:08:38.531 14:19:01 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.531 14:19:01 version -- common/autotest_common.sh@10 -- # set +x 00:08:38.531 ************************************ 00:08:38.531 END TEST version 00:08:38.531 ************************************ 00:08:38.531 14:19:02 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:38.531 14:19:02 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:38.531 14:19:02 -- spdk/autotest.sh@194 -- # uname -s 00:08:38.531 14:19:02 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:08:38.531 14:19:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:38.531 14:19:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:38.531 14:19:02 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:38.531 14:19:02 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:08:38.531 14:19:02 -- spdk/autotest.sh@256 -- # timing_exit lib 00:08:38.531 14:19:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:38.531 14:19:02 -- common/autotest_common.sh@10 -- # set +x 00:08:38.531 14:19:02 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:08:38.531 14:19:02 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:08:38.531 14:19:02 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:08:38.531 14:19:02 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:08:38.531 14:19:02 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:08:38.531 14:19:02 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:08:38.531 14:19:02 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:38.531 14:19:02 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:38.531 14:19:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.531 14:19:02 -- common/autotest_common.sh@10 -- # set +x 00:08:38.531 ************************************ 00:08:38.531 START TEST nvmf_tcp 00:08:38.531 ************************************ 00:08:38.531 14:19:02 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:38.531 * Looking for test storage... 
00:08:38.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:38.531 14:19:02 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:38.531 14:19:02 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:08:38.531 14:19:02 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:38.792 14:19:02 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:38.792 14:19:02 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.792 14:19:02 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.792 14:19:02 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.792 14:19:02 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.792 14:19:02 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.792 14:19:02 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.792 14:19:02 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.792 14:19:02 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.792 14:19:02 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.793 14:19:02 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.793 14:19:02 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.793 14:19:02 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:38.793 14:19:02 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:38.793 14:19:02 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.793 14:19:02 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.793 14:19:02 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:38.793 14:19:02 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:38.793 14:19:02 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.793 14:19:02 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:38.793 14:19:02 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.793 14:19:02 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:38.793 14:19:02 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:38.793 14:19:02 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.793 14:19:02 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:38.793 14:19:02 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.793 14:19:02 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.793 14:19:02 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.793 14:19:02 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:38.793 14:19:02 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.793 14:19:02 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:38.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.793 --rc genhtml_branch_coverage=1 00:08:38.793 --rc genhtml_function_coverage=1 00:08:38.793 --rc genhtml_legend=1 00:08:38.793 --rc geninfo_all_blocks=1 00:08:38.793 --rc geninfo_unexecuted_blocks=1 00:08:38.793 00:08:38.793 ' 00:08:38.793 14:19:02 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:38.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.793 --rc genhtml_branch_coverage=1 00:08:38.793 --rc genhtml_function_coverage=1 00:08:38.793 --rc genhtml_legend=1 00:08:38.793 --rc geninfo_all_blocks=1 00:08:38.793 --rc geninfo_unexecuted_blocks=1 00:08:38.793 00:08:38.793 ' 00:08:38.793 14:19:02 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:08:38.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.793 --rc genhtml_branch_coverage=1 00:08:38.793 --rc genhtml_function_coverage=1 00:08:38.793 --rc genhtml_legend=1 00:08:38.793 --rc geninfo_all_blocks=1 00:08:38.793 --rc geninfo_unexecuted_blocks=1 00:08:38.793 00:08:38.793 ' 00:08:38.793 14:19:02 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:38.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.793 --rc genhtml_branch_coverage=1 00:08:38.793 --rc genhtml_function_coverage=1 00:08:38.793 --rc genhtml_legend=1 00:08:38.793 --rc geninfo_all_blocks=1 00:08:38.793 --rc geninfo_unexecuted_blocks=1 00:08:38.793 00:08:38.793 ' 00:08:38.793 14:19:02 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:38.793 14:19:02 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:38.793 14:19:02 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:38.793 14:19:02 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:38.793 14:19:02 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.793 14:19:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:38.793 ************************************ 00:08:38.793 START TEST nvmf_target_core 00:08:38.793 ************************************ 00:08:38.793 14:19:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:38.793 * Looking for test storage... 
00:08:38.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:38.793 14:19:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:38.793 14:19:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:08:38.793 14:19:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:39.054 14:19:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:39.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.055 --rc genhtml_branch_coverage=1 00:08:39.055 --rc genhtml_function_coverage=1 00:08:39.055 --rc genhtml_legend=1 00:08:39.055 --rc geninfo_all_blocks=1 00:08:39.055 --rc geninfo_unexecuted_blocks=1 00:08:39.055 00:08:39.055 ' 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:39.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.055 --rc genhtml_branch_coverage=1 
00:08:39.055 --rc genhtml_function_coverage=1 00:08:39.055 --rc genhtml_legend=1 00:08:39.055 --rc geninfo_all_blocks=1 00:08:39.055 --rc geninfo_unexecuted_blocks=1 00:08:39.055 00:08:39.055 ' 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:39.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.055 --rc genhtml_branch_coverage=1 00:08:39.055 --rc genhtml_function_coverage=1 00:08:39.055 --rc genhtml_legend=1 00:08:39.055 --rc geninfo_all_blocks=1 00:08:39.055 --rc geninfo_unexecuted_blocks=1 00:08:39.055 00:08:39.055 ' 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:39.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.055 --rc genhtml_branch_coverage=1 00:08:39.055 --rc genhtml_function_coverage=1 00:08:39.055 --rc genhtml_legend=1 00:08:39.055 --rc geninfo_all_blocks=1 00:08:39.055 --rc geninfo_unexecuted_blocks=1 00:08:39.055 00:08:39.055 ' 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:39.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:39.055 ************************************ 00:08:39.055 START TEST nvmf_abort 00:08:39.055 ************************************ 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:39.055 * Looking for test storage... 
00:08:39.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:39.055 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.317 
14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.317 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:39.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.318 --rc genhtml_branch_coverage=1 00:08:39.318 --rc genhtml_function_coverage=1 00:08:39.318 --rc genhtml_legend=1 00:08:39.318 --rc geninfo_all_blocks=1 00:08:39.318 --rc 
geninfo_unexecuted_blocks=1 00:08:39.318 00:08:39.318 ' 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:39.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.318 --rc genhtml_branch_coverage=1 00:08:39.318 --rc genhtml_function_coverage=1 00:08:39.318 --rc genhtml_legend=1 00:08:39.318 --rc geninfo_all_blocks=1 00:08:39.318 --rc geninfo_unexecuted_blocks=1 00:08:39.318 00:08:39.318 ' 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:39.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.318 --rc genhtml_branch_coverage=1 00:08:39.318 --rc genhtml_function_coverage=1 00:08:39.318 --rc genhtml_legend=1 00:08:39.318 --rc geninfo_all_blocks=1 00:08:39.318 --rc geninfo_unexecuted_blocks=1 00:08:39.318 00:08:39.318 ' 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:39.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.318 --rc genhtml_branch_coverage=1 00:08:39.318 --rc genhtml_function_coverage=1 00:08:39.318 --rc genhtml_legend=1 00:08:39.318 --rc geninfo_all_blocks=1 00:08:39.318 --rc geninfo_unexecuted_blocks=1 00:08:39.318 00:08:39.318 ' 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.318 14:19:02 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:39.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:08:39.318 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:47.462 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:47.463 14:19:09 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:47.463 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:47.463 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:47.463 14:19:09 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:47.463 Found net devices under 0000:31:00.0: cvl_0_0 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net 
devices under 0000:31:00.1: cvl_0_1' 00:08:47.463 Found net devices under 0000:31:00.1: cvl_0_1 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:47.463 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:47.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:47.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:08:47.463 00:08:47.463 --- 10.0.0.2 ping statistics --- 00:08:47.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.463 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:47.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:47.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:08:47.463 00:08:47.463 --- 10.0.0.1 ping statistics --- 00:08:47.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.463 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=2793642 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 2793642 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2793642 ']' 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.463 14:19:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:47.463 [2024-10-07 14:19:10.373189] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:08:47.463 [2024-10-07 14:19:10.373315] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.463 [2024-10-07 14:19:10.530785] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:47.463 [2024-10-07 14:19:10.760008] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.463 [2024-10-07 14:19:10.760082] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.463 [2024-10-07 14:19:10.760095] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.463 [2024-10-07 14:19:10.760108] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.463 [2024-10-07 14:19:10.760119] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:47.463 [2024-10-07 14:19:10.762106] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.463 [2024-10-07 14:19:10.762300] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.463 [2024-10-07 14:19:10.762324] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:47.463 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:47.463 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:08:47.463 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:47.463 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:47.463 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:47.728 [2024-10-07 14:19:11.191899] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:47.728 Malloc0 00:08:47.728 14:19:11 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:47.728 Delay0 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:47.728 [2024-10-07 14:19:11.299724] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.728 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:47.728 [2024-10-07 14:19:11.409738] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:50.271 Initializing NVMe Controllers 00:08:50.271 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:50.271 controller IO queue size 128 less than required 00:08:50.271 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:50.271 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:50.271 Initialization complete. Launching workers. 
00:08:50.271 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 27386 00:08:50.271 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27447, failed to submit 66 00:08:50.271 success 27386, unsuccessful 61, failed 0 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:50.271 rmmod nvme_tcp 00:08:50.271 rmmod nvme_fabrics 00:08:50.271 rmmod nvme_keyring 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:08:50.271 14:19:13 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 2793642 ']' 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 2793642 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2793642 ']' 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2793642 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2793642 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2793642' 00:08:50.271 killing process with pid 2793642 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2793642 00:08:50.271 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2793642 00:08:51.214 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:51.214 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:51.214 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:51.214 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:08:51.214 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:08:51.214 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- 
# grep -v SPDK_NVMF 00:08:51.214 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:08:51.214 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:51.214 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:51.214 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.214 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.214 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.129 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:53.129 00:08:53.129 real 0m14.190s 00:08:53.129 user 0m15.426s 00:08:53.129 sys 0m6.595s 00:08:53.129 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.129 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:53.129 ************************************ 00:08:53.129 END TEST nvmf_abort 00:08:53.129 ************************************ 00:08:53.391 14:19:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:53.391 14:19:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:53.391 14:19:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.391 14:19:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.391 ************************************ 00:08:53.391 START TEST nvmf_ns_hotplug_stress 00:08:53.391 ************************************ 00:08:53.391 14:19:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:53.391 * Looking for test storage... 00:08:53.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:53.391 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.391 
14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.391 14:19:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:53.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.391 --rc genhtml_branch_coverage=1 00:08:53.391 --rc genhtml_function_coverage=1 00:08:53.391 --rc genhtml_legend=1 00:08:53.391 --rc geninfo_all_blocks=1 00:08:53.391 --rc geninfo_unexecuted_blocks=1 00:08:53.391 00:08:53.391 ' 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:53.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.391 --rc genhtml_branch_coverage=1 00:08:53.391 --rc genhtml_function_coverage=1 00:08:53.391 --rc genhtml_legend=1 00:08:53.391 --rc geninfo_all_blocks=1 00:08:53.391 --rc geninfo_unexecuted_blocks=1 00:08:53.391 00:08:53.391 ' 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:53.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.391 --rc genhtml_branch_coverage=1 00:08:53.391 --rc genhtml_function_coverage=1 00:08:53.391 --rc genhtml_legend=1 00:08:53.391 --rc geninfo_all_blocks=1 00:08:53.391 --rc geninfo_unexecuted_blocks=1 00:08:53.391 00:08:53.391 ' 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:53.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.391 --rc genhtml_branch_coverage=1 00:08:53.391 --rc genhtml_function_coverage=1 00:08:53.391 --rc genhtml_legend=1 00:08:53.391 --rc geninfo_all_blocks=1 00:08:53.391 --rc geninfo_unexecuted_blocks=1 00:08:53.391 
00:08:53.391 ' 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.391 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:53.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:08:53.653 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:01.799 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:09:01.800 14:19:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:01.800 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:01.800 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:01.800 14:19:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:01.800 Found net devices under 0000:31:00.0: cvl_0_0 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:01.800 14:19:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:01.800 Found net devices under 0000:31:00.1: cvl_0_1 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:01.800 14:19:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:01.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:01.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:09:01.800 00:09:01.800 --- 10.0.0.2 ping statistics --- 00:09:01.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.800 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:01.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:01.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:09:01.800 00:09:01.800 --- 10.0.0.1 ping statistics --- 00:09:01.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.800 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.800 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:09:01.801 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:01.801 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.801 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:01.801 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:01.801 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:09:01.801 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:01.801 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:01.801 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:01.801 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:01.801 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:01.801 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:01.801 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=2798788 00:09:01.801 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 2798788 00:09:01.801 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:01.801 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2798788 ']' 00:09:01.801 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.801 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:01.801 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:01.801 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:01.801 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:01.801 [2024-10-07 14:19:24.659982] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:09:01.801 [2024-10-07 14:19:24.660093] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.801 [2024-10-07 14:19:24.802237] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:01.801 [2024-10-07 14:19:25.018678] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.801 [2024-10-07 14:19:25.018756] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.801 [2024-10-07 14:19:25.018770] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.801 [2024-10-07 14:19:25.018783] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.801 [2024-10-07 14:19:25.018794] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:01.801 [2024-10-07 14:19:25.020856] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.801 [2024-10-07 14:19:25.020984] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.801 [2024-10-07 14:19:25.021062] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.801 14:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:01.801 14:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:09:01.801 14:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:01.801 14:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:01.801 14:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:01.801 14:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.801 14:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:01.801 14:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:02.062 [2024-10-07 14:19:25.638541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.062 14:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:02.323 14:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.323 [2024-10-07 14:19:26.009558] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.584 14:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:02.584 14:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:02.845 Malloc0 00:09:02.845 14:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:03.106 Delay0 00:09:03.106 14:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.106 14:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:03.368 NULL1 00:09:03.368 14:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:03.630 14:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2799275 00:09:03.630 14:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:03.630 14:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:03.630 14:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.630 Read completed with error (sct=0, sc=11) 00:09:03.891 14:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.891 14:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:03.891 14:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:04.152 true 00:09:04.152 14:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:04.152 14:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.094 14:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:09:05.094 14:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:05.094 14:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:05.355 true 00:09:05.355 14:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:05.355 14:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.617 14:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.617 14:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:05.617 14:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:05.878 true 00:09:05.878 14:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:05.878 14:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.262 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.262 14:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.262 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:09:07.262 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.262 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.262 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.262 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.262 14:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:07.262 14:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:07.262 true 00:09:07.262 14:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:07.262 14:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.204 14:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.466 14:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:08.466 14:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:08.466 true 00:09:08.466 14:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:08.466 14:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:09:08.727 14:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.988 14:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:08.988 14:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:08.988 true 00:09:08.988 14:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:08.988 14:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.249 14:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.249 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.249 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.249 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.249 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.249 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.537 [2024-10-07 14:19:32.960899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.968572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.968603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.968634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.968665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.968700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.968738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.968773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.968804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.968833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.968866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.968899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.968936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.968968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.968995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 
14:19:32.969033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.969979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 
[2024-10-07 14:19:32.970015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.970047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.970088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.970120] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.970149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.970179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.970562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.970598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.970630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.970662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.970696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.539 [2024-10-07 14:19:32.970731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.970763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.970796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.970828] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.970856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.970888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.970922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.970955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.970986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971759] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.971991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.972023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.972051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.972079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.972107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.972134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.972168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.972200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.972231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.972264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.972296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.972326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.972366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.972397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.972428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.972459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.972492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.972525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.972892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.972927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.972963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 
14:19:32.972994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 
[2024-10-07 14:19:32.973925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.973997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.974031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.974069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.974100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.974143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.974174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.974208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.974239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.974269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.540 [2024-10-07 14:19:32.974327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.974358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.974388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.974419] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.974459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.974491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.974526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.974559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.974596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.974628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.974659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.974692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.974724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.974755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.974787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.974819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.974851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.974883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.541 [2024-10-07 14:19:32.974924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.974954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.974990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.975573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.975608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.975638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.975668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.975698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.975728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.975765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.975803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.975839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.975867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.975899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.975931] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.975960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.975993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 
14:19:32.976875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.976969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.977007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.977040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.977074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.977106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.977134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.977165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.977195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.977226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.977258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.977293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.977322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.977352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.977380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.977407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.977435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.977464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.977491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.977519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.977546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.977675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.977704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.541 [2024-10-07 14:19:32.977732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.977760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.977787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.977823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 
[2024-10-07 14:19:32.977856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.977888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.977920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.977949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.977981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978320] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.978983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.979021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.979053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.979084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.979117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.979148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.979186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.979218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.979256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.979287] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.979318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.979350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.979388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.979418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.979448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.979481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.979512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.979548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.979583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.979616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.979643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.979678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.979711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.979742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.980107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.980143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.980177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.980225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.980258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.980293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.980325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.980357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.980392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.980422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.980455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.980487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.980546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.980577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 
14:19:32.980608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.980641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.980673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.980720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.980753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.980786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.980820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.980852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.980885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.980917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.980968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.981006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.981039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.981073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.981104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.981140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.981177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.981209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.981245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.981277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.981308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.981340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.981373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.981408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.981441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.981473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.981502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.981534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.542 [2024-10-07 14:19:32.981574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 
[2024-10-07 14:19:32.981604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.981634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.981667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.981703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.981734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.981761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.981792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.981825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.981857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.981888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.981918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.981949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.981986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.982025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.982058] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.982089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.982122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.982161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.982189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.982223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.982608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.982643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.982675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.982705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.982738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.982769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.982804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.982837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.982867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.543 [2024-10-07 14:19:32.982906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.982938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.982972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983345] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.983970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.984007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.984040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.984074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.984114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.984144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.984176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.984222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.984252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.984284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 
14:19:32.984316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.984346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.984379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.984410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.984442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.984472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.984501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.984529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.984556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.984585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.984615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.984645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.985004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.985049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.985086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.985118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.985148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.985180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.985214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.985246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.985278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.985314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.985347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.985404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.985436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.985466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.985503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.985532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.985571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 
[2024-10-07 14:19:32.985602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.985633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.985663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.543 [2024-10-07 14:19:32.985696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.544 [2024-10-07 14:19:32.985730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.544 [2024-10-07 14:19:32.985761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.544 [2024-10-07 14:19:32.985803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.544 [2024-10-07 14:19:32.985835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.544 [2024-10-07 14:19:32.985870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.544 [2024-10-07 14:19:32.985902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.544 [2024-10-07 14:19:32.985938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.544 [2024-10-07 14:19:32.985969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.544 [2024-10-07 14:19:32.986005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.544 [2024-10-07 14:19:32.986037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.544 [2024-10-07 14:19:32.986074] ctrlr_bdev.c: 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:09.544
14:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:09.546 14:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:09.546
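The error flooding this stage comes from a buffer-length validation in `nvmf_bdev_ctrlr_read_cmd`: the byte count of the requested read (number of logical blocks times the block size) exceeds the length of the SGL buffer the host supplied, so the read is rejected before it reaches the bdev. A minimal sketch of that check, assuming the simplified signature and the literal values from the log (the real SPDK code operates on the NVMe command and request structures in C):

```python
def check_read_length(nlb: int, block_size: int, sgl_length: int) -> bool:
    """Sketch of the validation behind the repeated log error:
    a read of nlb blocks must fit within the SGL buffer."""
    read_bytes = nlb * block_size
    if read_bytes > sgl_length:
        # Mirrors the log message format; the request would be
        # completed with an error status instead of being submitted.
        print(f"*ERROR*: Read NLB {nlb} * block size {block_size} "
              f"> SGL length {sgl_length}")
        return False
    return True

# The log's case: 1 block of 512 bytes against a 1-byte SGL fails.
check_read_length(1, 512, 1)
```

In the log the failure maps to the completion status `sct=0, sc=15` (Data Transfer Error) reported by the suppressed-message line.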
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.546 [2024-10-07 14:19:32.996808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.546 [2024-10-07 14:19:32.996843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.546 [2024-10-07 14:19:32.996876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.546 [2024-10-07 14:19:32.996905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.546 [2024-10-07 14:19:32.996935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.546 [2024-10-07 14:19:32.996970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.546 [2024-10-07 14:19:32.997005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.546 [2024-10-07 14:19:32.997036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.546 [2024-10-07 14:19:32.997073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.546 [2024-10-07 14:19:32.997107] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.546 [2024-10-07 14:19:32.997140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.546 [2024-10-07 14:19:32.997169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.546 [2024-10-07 14:19:32.997207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.546 [2024-10-07 14:19:32.997237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.546 [2024-10-07 14:19:32.997421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.997455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.997487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.997519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.997555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.997589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.997620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.997654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.997690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.997726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.997758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.997790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.997820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.997855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.997888] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.997920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.997953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.997986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 
14:19:32.998834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.998995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.999035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.999087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.999117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.999155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.999188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.999219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.999255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.999290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.999322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.999352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.999387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.999418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.999449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.999481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:32.999514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.000285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.000320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.000351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.000384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.000417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.000446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.000474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.000503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 
[2024-10-07 14:19:33.000530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.000559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.000586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.000616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.000647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.000675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.000703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.000731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.000759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.000787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.000816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.000844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.000872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.000900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.000928] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.000956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.000987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.001026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.001060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.001093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.001124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.001155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.001191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.001224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.001263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.001295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.001325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.001356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.001388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.547 [2024-10-07 14:19:33.001426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.001460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.001493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.547 [2024-10-07 14:19:33.001527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.001560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.001611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.001646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.001679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.001713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.001746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.001778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.001815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.001848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.001881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.001917] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.001958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.001990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.002975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 
14:19:33.003017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 
[2024-10-07 14:19:33.003951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.003982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.004021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.004052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.004083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.004114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.004146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.004183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.004220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.004254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.004284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.004314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.004349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.004385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.004418] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.548 [2024-10-07 14:19:33.004450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [identical message repeated from 2024-10-07 14:19:33.004482 through 14:19:33.015692; only the timestamps differ — duplicate log lines elided]
[2024-10-07 14:19:33.015727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.015762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.015794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.015831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.015862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.015892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.015923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.015952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.015980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.016018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.016053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.016084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.016114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.016158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.016191] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.016222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.016253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.016294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.016324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.016352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.016388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.016417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.016449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.016482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.016514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.016547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.016579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.016609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.016642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.551 [2024-10-07 14:19:33.016673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.016704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.551 [2024-10-07 14:19:33.016733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.017479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.017517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.017551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.017585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.017618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.017652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.017682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.017711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.017739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.017767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.017795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.017823] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.017851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.017879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.017907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.017935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.017962] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.017991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 
14:19:33.018680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.018987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.019024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.019059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.019091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.019125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.019158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.019190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.019228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.019261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.019294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.019326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.019364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.019397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.019429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.019461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.019762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.019796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.019829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.019861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.019893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 
[2024-10-07 14:19:33.019923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.019957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.019993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020397] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.020982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.021018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.021053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.552 [2024-10-07 14:19:33.021086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.021407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.021442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.021474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.021504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.021539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.021570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.021603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.021635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.021670] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.021701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.021729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.021760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.021791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.021824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.021856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.021887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.021920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.021956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.021992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 
14:19:33.022621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.022974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.023009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.023041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.023075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 [2024-10-07 14:19:33.023105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.553 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:09.553 [... preceding *ERROR* line repeated verbatim, timestamps 14:19:33.023105 through 14:19:33.035031 ...] 00:09:09.556 
[2024-10-07 14:19:33.035065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035510] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.035937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.036087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.556 [2024-10-07 14:19:33.036123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.036153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.036186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.036223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.036256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.036292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.036323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.036356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.036391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.036424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.036457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.036487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.036518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.036552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.036584] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.036618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.036652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.036685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.036722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.556 [2024-10-07 14:19:33.036758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.036791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.036823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.037255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.037291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.037325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.037378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.037414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.037445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.037480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.037514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.037551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.037582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.037611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.037646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.037678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.037715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.037748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.037780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.037811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.037847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.037880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.037910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.037941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 
14:19:33.037978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.038991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 
[2024-10-07 14:19:33.039129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039593] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.039981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040521] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.557 [2024-10-07 14:19:33.040782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.040816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.040847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.040985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.041022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.041058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.041090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.041123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.041157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.041188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.041221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.041254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.041286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.041320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.041350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.041384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.041415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.041449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.041483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.041513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.041546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 
14:19:33.041577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.041616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.041650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.041681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.041714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.042155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.042193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.042224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.042257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.042291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.042325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.042355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.042388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.042417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.042447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 [2024-10-07 14:19:33.042477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.558 
[... identical error from ctrlr_bdev.c:361 (nvmf_bdev_ctrlr_read_cmd) repeated for timestamps 2024-10-07 14:19:33.042508 through 14:19:33.053983, elapsed 00:09:09.558 - 00:09:09.561 ...] 
[2024-10-07 14:19:33.054019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 
[2024-10-07 14:19:33.054513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.054973] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.055011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.055044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.055085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.055116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.055146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.055176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.055209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.055246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.055280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.055312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.055344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.055376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.055413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.055446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.561 [2024-10-07 14:19:33.055479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.055513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.055820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.055855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.055887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.055919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.055957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.055992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.056029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.056066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.056098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.056129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.056158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.056189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.056224] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.056256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.056289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.056329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.056359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.056648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.056686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.561 [2024-10-07 14:19:33.056716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.056749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.056781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.056813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.056847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.056877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.056910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.056946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.056978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 
14:19:33.057443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.057998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.058037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.058071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.058104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.058135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.058321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.058358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.058389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.058427] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.058460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.058490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.058522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 
[2024-10-07 14:19:33.058556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.058588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.058618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.058655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.058687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.058717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.058747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.058778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.058810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.058843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.058875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.058907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.058939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.058971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059009] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059959] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.059992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.060031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.060061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.060095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.060150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.060183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.060216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.060252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.060285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.562 [2024-10-07 14:19:33.060316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.563 [2024-10-07 14:19:33.060347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.563 [2024-10-07 14:19:33.060377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.563 [2024-10-07 14:19:33.060414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.563 [2024-10-07 14:19:33.060785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.563 [2024-10-07 14:19:33.060822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.563 [2024-10-07 14:19:33.060857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.563 [2024-10-07 14:19:33.060887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.563 [2024-10-07 14:19:33.060919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.563 [2024-10-07 14:19:33.060951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.563 [2024-10-07 14:19:33.060982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.563 [2024-10-07 14:19:33.061023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.563 [2024-10-07 14:19:33.061055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.563 [2024-10-07 14:19:33.061086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.563 [2024-10-07 14:19:33.061118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.563 [2024-10-07 14:19:33.061151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.563 [2024-10-07 14:19:33.061183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.563 [2024-10-07 14:19:33.061217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.563 [2024-10-07 14:19:33.061248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.563 [2024-10-07 
14:19:33.061281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.563 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.073184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.073215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.073249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.073281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.073313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.073347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.073379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.073410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.073442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.073475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.073506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.073537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.073568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.073602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.566 [2024-10-07 14:19:33.073634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.073665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074481] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.074991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.075028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.075065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.075096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.075131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.075163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.075196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.075236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.075267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.075300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.075335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.075367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.075397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.075437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 
14:19:33.075470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.075501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.075531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.075565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.075597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.075631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.075832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.075870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.075901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.075934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.075973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.076012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.076047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.076082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.076114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.076142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.076174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.076207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.076239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.076270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.076301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.076332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.076365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.076394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.076424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.076456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.566 [2024-10-07 14:19:33.076488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.076522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.076558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 
[2024-10-07 14:19:33.076592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.076623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.076656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.076690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.076723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.076753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.076784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.076816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.076852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.076883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.076914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.076946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.076979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077053] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.077972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.078006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.078035] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.078063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.078090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.078119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.078146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.078177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.078208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.078241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.078272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.078302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.078335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.078366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.078397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.078430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.078877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.078913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.078945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.078976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 
14:19:33.079395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.567 [2024-10-07 14:19:33.079936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.079964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.079994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.080035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.080066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.080096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.080126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.080159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.080201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.080236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.080265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.080294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 
[2024-10-07 14:19:33.080330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.080358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.080388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.080576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.080614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.080648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.080680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.080713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.080767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.080799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.080831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.080867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.080899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.080937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.080969] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.568 [2024-10-07 14:19:33.081006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 
[2024-10-07 14:19:33.091990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.092027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.092061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.092095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.092127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.092158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.092189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.092221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.092256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.092287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.092322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.092353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.092384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.092416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.092451] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.092483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.092514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.092551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.093224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.093261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.093322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.093353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.093386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.093418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.093451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.093484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.093515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.093548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.093585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.571 [2024-10-07 14:19:33.093622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.093653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.093684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.093715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.093750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.093781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.093813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.093854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.093888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.093919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.093954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.093987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094092] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.094985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.095023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 
14:19:33.095055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.095086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.095116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.095148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.095180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.095211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.095241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.095270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.095304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.095456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.095509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.095540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.095575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.571 [2024-10-07 14:19:33.095606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.095640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.095672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.095703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.095735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.095768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.095805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.095837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.095870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.095905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.095938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.095967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 
[2024-10-07 14:19:33.096144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096603] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.096983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.097015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.097044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.572 [2024-10-07 14:19:33.097073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.097101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.097130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.097158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.097190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.097222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.097254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.097287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.097320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.097354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.097383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.097411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.097440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.097475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.097513] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.097872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.097908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.097943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.097974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 
14:19:33.098854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.098972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.099009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.099042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.099079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.099114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.099145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.099181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.099213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.099247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.099280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.099313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.099346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.572 [2024-10-07 14:19:33.099377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.573 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:09.574 
14:19:33.110980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 
[2024-10-07 14:19:33.111947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.111985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.112022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.112055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.112088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.112125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.112157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.112523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.112560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.112592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.112625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.112660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.112692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.112724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.112757] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.112789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.112822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.112854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.112890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.112922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.112954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.112987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113722] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.113984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.114021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.114052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.114086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.576 [2024-10-07 14:19:33.114117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.114153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.114184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.114216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.114244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.114272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.114305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.114337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.114366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.114396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.114430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.114464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.114496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.114527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.114558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.114922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.114956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 
14:19:33.114996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.115958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 
[2024-10-07 14:19:33.115989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116477] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.577 [2024-10-07 14:19:33.116992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.117026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.117063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.117097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.117457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.117493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.117529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.117560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.117594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.117626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.577 [2024-10-07 14:19:33.117662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.578 [2024-10-07 14:19:33.117693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.578 [2024-10-07 14:19:33.117727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.578 [2024-10-07 14:19:33.117759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.578 [2024-10-07 14:19:33.117792] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.578 [2024-10-07 14:19:33.117831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.578 [2024-10-07 14:19:33.117861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.578 [2024-10-07 14:19:33.117893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.578 [2024-10-07 14:19:33.117928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.578 [2024-10-07 14:19:33.117961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.578 [2024-10-07 14:19:33.117994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.578 [2024-10-07 14:19:33.118031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.578 [2024-10-07 14:19:33.118065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.578 [2024-10-07 14:19:33.118097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.578 [2024-10-07 14:19:33.118130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.578 [2024-10-07 14:19:33.118163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.578 [2024-10-07 14:19:33.118201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.578 [2024-10-07 14:19:33.118233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.578 [2024-10-07 14:19:33.118265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.578 [2024-10-07 14:19:33.118298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.578 [... last message repeated through 14:19:33.130093; identical entries omitted ...] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 *
block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 
14:19:33.130578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.130984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.131019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.131055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.131096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.131136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.581 [2024-10-07 14:19:33.131173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.131205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.131239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.131271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.131304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.131345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.131376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.131408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.131445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.131478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.131509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 
[2024-10-07 14:19:33.131547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.131580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.131615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.131650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.131681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.131722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.131755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.131787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.131821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.131855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.131991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.132043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.132079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.132113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.132151] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.132186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.132219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.132253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.132286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.132317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.132355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.132386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.132417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.132457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.132489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.132520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.132556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.132997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133537] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.133976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.134012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.134049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.134081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.134114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.134145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.134177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.134222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.134256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.134286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.134334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.134368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.134401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.134441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.134473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.134504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.134544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 
14:19:33.134579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.134753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.134785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.134819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.582 [2024-10-07 14:19:33.134850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.134881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.134920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.134953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.134985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135090] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 
[2024-10-07 14:19:33.135679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.135998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136118] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.136970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.137008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.137041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.137075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.137108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.137141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.137187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.583 [2024-10-07 14:19:33.137220] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.585 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
block size 512 > SGL length 1 00:09:09.586 [2024-10-07 14:19:33.149370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.586 [2024-10-07 14:19:33.149402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.586 [2024-10-07 14:19:33.149433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.586 [2024-10-07 14:19:33.149466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.586 [2024-10-07 14:19:33.149505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.586 [2024-10-07 14:19:33.149536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.586 [2024-10-07 14:19:33.149569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.586 [2024-10-07 14:19:33.149604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.586 [2024-10-07 14:19:33.149638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.586 [2024-10-07 14:19:33.149668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.586 [2024-10-07 14:19:33.149702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.586 [2024-10-07 14:19:33.149735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.586 [2024-10-07 14:19:33.149768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.586 [2024-10-07 14:19:33.149800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 
14:19:33.149833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.149866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.149901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.149934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.149965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 
[2024-10-07 14:19:33.150815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.150982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.151018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.151050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.151085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.151117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.151146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.151177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.151209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.151243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.151275] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.151306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.151337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.151368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.151400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.151780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.151818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.151855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.151893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.151924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.151955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.151986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152580] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.152993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.153033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.153065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.153097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.153137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.153169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.153201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.153234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.153271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.153302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 true 00:09:09.587 [2024-10-07 14:19:33.153337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.153377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.153408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.153436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.587 [2024-10-07 14:19:33.153470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.153502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.153539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 
[2024-10-07 14:19:33.153569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.153601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.153631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.153667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.153701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.153732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.153761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.153795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.153829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.153863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.154480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.154517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.154553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.154585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.154620] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.154657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.154691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.154723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.154778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.154813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.154844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.154876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.154908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.154941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.154972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155597] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155658] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.155977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.156014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.156051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.156085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.156119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.156151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.156184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.156218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.156249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.156281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.156314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.156349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.156380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.156412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.156443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.156474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 14:19:33.156505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 [2024-10-07 
14:19:33.156541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.588 
[... identical "ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" message repeated for timestamps 14:19:33.156572 through 14:19:33.168496 (00:09:09.588-00:09:09.591); repeats elided ...] 
00:09:09.591 [2024-10-07 
14:19:33.168526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.591 [2024-10-07 14:19:33.168554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.591 [2024-10-07 14:19:33.168582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.591 [2024-10-07 14:19:33.168622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.591 [2024-10-07 14:19:33.168768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.168802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.168831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.168863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.168896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.168931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.168970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.169008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.169042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.169074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.169112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.169145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.169559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.169590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.169619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.169653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.169683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.169713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.169747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.169776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.169806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.169838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.169885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.169915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.169948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 
[2024-10-07 14:19:33.169978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170478] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.170998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171580] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171919] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.171983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.172022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.172056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.172089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.172122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.172152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.172184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.172214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.172246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.172281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.172321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.172356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.172391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.172440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.172478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.172509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.172542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 
14:19:33.172576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.172608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.592 [2024-10-07 14:19:33.172644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.172675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.172707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.172742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.172774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.172808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.172839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.172877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.172909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.172942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.172977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.173009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.173040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.173070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.173103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.173138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.173170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.173205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.173238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.173271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.173306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.173339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.173371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.173406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.173440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.173474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.173503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 
[2024-10-07 14:19:33.173534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.173865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.173901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.173933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.173965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.173996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174296] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.174986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.175020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.175051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.175085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.175118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.175148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.175187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.175221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.175254] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.175288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.175319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.175366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.175399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.175431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.175468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.175502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.175542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.175573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.175606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.175640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.175672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.175704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.175734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.593 [2024-10-07 14:19:33.175764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:09.595 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275
00:09:09.595 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:09:09.595 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.596 [2024-10-07 14:19:33.187571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.596 [2024-10-07 14:19:33.187603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.596 [2024-10-07 14:19:33.187637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.596 [2024-10-07 14:19:33.187668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.596 [2024-10-07 14:19:33.187701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.596 [2024-10-07 14:19:33.187740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.596 [2024-10-07 14:19:33.187770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.596 [2024-10-07 14:19:33.187799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.596 [2024-10-07 14:19:33.187829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.187867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.187897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.187929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.187958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.187989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.597 [2024-10-07 14:19:33.188027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.188060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.188097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.188129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.188161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.188192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.188230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.188261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.188293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.188322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.188732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.188771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.188802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.188842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.188872] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.188907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.188939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.188974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 
14:19:33.189849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.189982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.190023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.190055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.190087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.190125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.190161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.190193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.190233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.190266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.190296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.190328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.190360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.190394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.190426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.190457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.190488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.190518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.190548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.190584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.190614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.190646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.190675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.190717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.190748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.190781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 
[2024-10-07 14:19:33.190811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.191525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.191563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.191597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.191625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.191657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.191690] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.191722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.191773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.191805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.191836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.191867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.191904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.191938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.191981] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.192020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.192052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.192114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.192145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.192180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.192214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.192246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.192279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.192314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.192348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.192382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.597 [2024-10-07 14:19:33.192418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.192449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.192482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.598 [2024-10-07 14:19:33.192517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.192551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.192593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.192625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.192659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.192705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.192737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.192770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.192799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.192831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.192868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.192899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.192931] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.192969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193008] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.193983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 
14:19:33.194085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.194995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.195031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.195064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 
[2024-10-07 14:19:33.195098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.195131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.195166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.195198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.195231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.195284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.195315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.195347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.195379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.195412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.195447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.195479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.195511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.195544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.195575] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.598 [2024-10-07 14:19:33.195613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.601 
[2024-10-07 14:19:33.206981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.601 [2024-10-07 14:19:33.207016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.601 [2024-10-07 14:19:33.207053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.601 [2024-10-07 14:19:33.207085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.601 [2024-10-07 14:19:33.207116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.601 [2024-10-07 14:19:33.207151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.601 [2024-10-07 14:19:33.207183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.601 [2024-10-07 14:19:33.207214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.601 [2024-10-07 14:19:33.207268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.601 [2024-10-07 14:19:33.207301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.601 [2024-10-07 14:19:33.207335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.601 [2024-10-07 14:19:33.207364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.601 [2024-10-07 14:19:33.207398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.601 [2024-10-07 14:19:33.207433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.601 [2024-10-07 14:19:33.207466] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.601 [2024-10-07 14:19:33.207499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.601 [2024-10-07 14:19:33.207536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.601 [2024-10-07 14:19:33.207569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.601 [2024-10-07 14:19:33.207599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.601 [2024-10-07 14:19:33.207634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.601 [2024-10-07 14:19:33.207665] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.601 [2024-10-07 14:19:33.207697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.601 [2024-10-07 14:19:33.207736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.601 [2024-10-07 14:19:33.207772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.207804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.207842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.207874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.207904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.207935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.602 [2024-10-07 14:19:33.207966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.207997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.208034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.208066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.208099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.208136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.208168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.208201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.208257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.208290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.208325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.208359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.208392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.208970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209011] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.209950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 
14:19:33.209982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 
[2024-10-07 14:19:33.210953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.210986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.211028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.211060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.211092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.211227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.211261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.211295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.211327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.211359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.211391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.211423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.211451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.211483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.211515] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.211545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.211576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.211613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.211645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.211677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.211708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.212007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.212043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.602 [2024-10-07 14:19:33.212072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212699] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.212978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 
14:19:33.213535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.213948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.214157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.214193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 [2024-10-07 14:19:33.214224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.603 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:09.604 [2024-10-07 
14:19:33.226110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.226986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 
[2024-10-07 14:19:33.227100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227590] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.227981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.228020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.228052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.894 [2024-10-07 14:19:33.228085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.228121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.228149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.228723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.228759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.228791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.228823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.228875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.228907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.228938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.228973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.229019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.229055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.229094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.229127] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.229164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.229196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.229231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.229267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.229299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.229332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.229366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.229398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.229430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.229463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.229495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.229528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.229562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.229593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.229628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.229660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.229691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.894 [2024-10-07 14:19:33.229723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.229756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.229789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.229821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.229854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.229901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.229933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.229967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 
14:19:33.230117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.230852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 
[2024-10-07 14:19:33.231207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231685] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.231995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232663] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.232998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.233033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.233068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.233104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.233135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.895 [2024-10-07 14:19:33.233509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.898 [2024-10-07 14:19:33.245228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.245262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.245296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.245330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.245362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.245401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.245434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.245466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.245500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.245718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.245752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.245787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.245818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.245848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.245883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 
14:19:33.245918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.245951] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.245985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 
[2024-10-07 14:19:33.246890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.246992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.247030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.247071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.247100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.247134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.247166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.247196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.247227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.247258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.247574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.247611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.247644] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.247677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.247707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.247741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.247774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.247807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.247846] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.247876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.247909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.247964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.247994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248651] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.248978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.249015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.249049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.249080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.249117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.249150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.899 [2024-10-07 14:19:33.249183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.249219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.249249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.249285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.249321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.249353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.249383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.249414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.249444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.249476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.249508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.249542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.249571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 
14:19:33.249604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.249637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.249668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.249700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.249833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.249866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.249895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.249928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.249959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.249996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.250039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.250073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.250103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.250138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.250170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.250204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.250237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.250269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.250317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.250351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.250796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.250831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.250863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.250895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.250934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.250969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 
[2024-10-07 14:19:33.251105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251550] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.900 [2024-10-07 14:19:33.251973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.252005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.252034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.252062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.252089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.252118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.252146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.252173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.252201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.252385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.252416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.252449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.252483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.252517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.900 [2024-10-07 14:19:33.252560] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line repeated (timestamps 2024-10-07 14:19:33.252591 through 14:19:33.258138) omitted ...]
00:09:09.902 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... identical *ERROR* line repeated (timestamps 2024-10-07 14:19:33.258170 through 14:19:33.264015) omitted ...]
block size 512 > SGL length 1 00:09:09.903 [2024-10-07 14:19:33.264048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.903 [2024-10-07 14:19:33.264082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.903 [2024-10-07 14:19:33.264115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.903 [2024-10-07 14:19:33.264147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.903 [2024-10-07 14:19:33.264186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.903 [2024-10-07 14:19:33.264219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.903 [2024-10-07 14:19:33.264251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.903 [2024-10-07 14:19:33.264280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.903 [2024-10-07 14:19:33.264316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.903 [2024-10-07 14:19:33.264348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.903 [2024-10-07 14:19:33.264769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.903 [2024-10-07 14:19:33.264803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.903 [2024-10-07 14:19:33.264842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.903 [2024-10-07 14:19:33.264876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.903 [2024-10-07 
14:19:33.264912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.264945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.264975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 
[2024-10-07 14:19:33.265924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.265991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266389] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.904 [2024-10-07 14:19:33.266883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.267267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.267303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.267337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.267370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.267402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.267434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.267465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.267499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.267535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.267569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.267601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.267638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.267669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.267700] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.267739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.267772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.267806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.267839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.267869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.267906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.267939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.267978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.268020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.268053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.268085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.268124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.268155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.268188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.268221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.268255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.268286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.268319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.268351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.268385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.268421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.268455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.268487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.268516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.268551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.268585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.268616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.268654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 
14:19:33.268685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.268718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.268750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.904 [2024-10-07 14:19:33.268781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.268810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.268842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.268880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.268915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.268948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.268979] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.269019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.269050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.269080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.269109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.269143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.269174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.269207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.269239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.269273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.269305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.269338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.269371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.269742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.269778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.269819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.269854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.269887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.269928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.269961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 
[2024-10-07 14:19:33.269992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270459] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.270987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.271023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.271055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.271086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.271121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.271152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.271184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.271218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.271251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.271283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.271311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.271339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.271367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.271395] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.905 [2024-10-07 14:19:33.271423] (previous message repeated verbatim for every log entry from [2024-10-07 14:19:33.271423] through [2024-10-07 14:19:33.282827], 00:09:09.905-00:09:09.908; duplicate lines omitted)
> SGL length 1 00:09:09.908 [2024-10-07 14:19:33.282860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.282892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.282924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.282964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283004] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283349] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283381] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.283967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.284005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.284046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.284080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.284111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.284140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.284172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.284204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.284236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.284270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.284647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 
14:19:33.284684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.284718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.908 [2024-10-07 14:19:33.284754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.284788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.284821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.284854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.284888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.284920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.284957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.284993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.285029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.285061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.285094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.285126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.285159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.285192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.285225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.285258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.285296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.285330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.285360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.285416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.285448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.285480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.285513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.285545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.285583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.285613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.285646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 
[2024-10-07 14:19:33.285678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.285713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.285745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.285777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286450] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.909 [2024-10-07 14:19:33.286971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287346] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287447] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287917] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.287982] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.288021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.288054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.288087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.288125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.288160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.288196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.288229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.288262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.288392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.288425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.909 [2024-10-07 14:19:33.288458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.288493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 
14:19:33.288525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.288557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.288596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.288627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.288659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.288692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.288743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.288775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.288809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.288842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.288875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.288905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.288936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.288969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.289008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.289043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.289076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.289110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.289143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.289178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.289210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.289242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.289279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.289311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.289343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.289763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.289799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.289828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.289862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 
[2024-10-07 14:19:33.289893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.289923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.289953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.289987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290332] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.910 [2024-10-07 14:19:33.290800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
00:09:09.910 [2024-10-07 14:19:33.290833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.912 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
> SGL length 1 00:09:09.913 [2024-10-07 14:19:33.302177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.302276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.302316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.302348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.302382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.302414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.302447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.302478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.302514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.302545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.302579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.302612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.302646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.302679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.302717] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.302749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.302788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.302820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.302852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.302886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.302921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.302963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.302992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.303030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.303064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.303097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.303130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.303162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.303196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.303229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.303261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.303295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.303328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.303358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.303392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.303422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.303458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.303493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.303526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.303558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.303591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.303622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.303656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 
14:19:33.303692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.303734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.303766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.303801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.303833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.304198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.304235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.304265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.304301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.304332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.304368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.304404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.304436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.304469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.304505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.304538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.304570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.304601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.304637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.304672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.913 [2024-10-07 14:19:33.304705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.304739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.304770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.304809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.304843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.304877] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.304908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.304944] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.304975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 
[2024-10-07 14:19:33.305012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305483] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305850] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305921] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.914 [2024-10-07 14:19:33.305989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.306026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.306058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.306091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.306126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.306163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.306200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.306234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.306268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.306302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.306334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.306471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.306514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.306547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.306579] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.306618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.306651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.306684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.306721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.306758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.306791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.306825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.306860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 
14:19:33.307756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.307986] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.308025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.308055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.308086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.308118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.914 [2024-10-07 14:19:33.308149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.308182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.308218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.308251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.308286] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.308317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.308350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.308384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.308417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.308448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.308481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.308512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.308543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.308576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.308605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.308638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.308677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 
[2024-10-07 14:19:33.308709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.308741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309572] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.309968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.310008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.915 [2024-10-07 14:19:33.310042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.915 [2024-10-07 14:19:33.310075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical "Read NLB 1 * block size 512 > SGL length 1" error repeated for each subsequent read command, timestamps 14:19:33.310108 through 14:19:33.322007 ...]
> SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322430] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322495] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.322977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.323013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.323048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.323082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.323114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.323147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.323181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.323219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.323252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.323282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.323317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.323349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.323385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.323423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.323457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 
14:19:33.323489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.323522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.323553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.323584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.323619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.323652] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.323683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.323723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.324103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.324142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.324175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.324207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.918 [2024-10-07 14:19:33.324239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.324271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.324308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.324342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.324374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.324411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.324443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.324476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.324509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.324542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.324573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.324613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.324645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.324926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.324960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.324990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 
[2024-10-07 14:19:33.325059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325505] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325536] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.919 [2024-10-07 14:19:33.325993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.326030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.326063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.326097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.326129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.326166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.326201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.326230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.326262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.326297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.326329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.326372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.326406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.326442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.326639] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.326671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.326702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.326741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.326774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.326806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.326847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.326875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.326908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.326938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.326969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 
14:19:33.327614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327783] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.327998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.328035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.328068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.328100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.919 [2024-10-07 14:19:33.328135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.328168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.328199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.328255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.328288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.328319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.328354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.328385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.328419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.328452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.328485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.328521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.328555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 
[2024-10-07 14:19:33.328588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.328619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.328651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.328689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.328720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.329117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.329155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.329194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.329224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.329258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.329293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.329324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.329359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.329392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 [2024-10-07 14:19:33.329422] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.920 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.920 [2024-10-07 14:19:33.510920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.519734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:09:09.923 [2024-10-07 14:19:33.519770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.519802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.519831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.519861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.519893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.520043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.520077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.520111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.520141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.520173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.520204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.520237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.520268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.520300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.520330] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.520362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.520394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.520423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.520455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.520487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.520517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.520556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.520590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.520621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.520654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 
14:19:33.521689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.521992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.522028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.522059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.522086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.522114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.522142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.522169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.522197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.522225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.522253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.522279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.522307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.522334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.522363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.522391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:09.923 [2024-10-07 14:19:33.522662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.522696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.522736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.522771] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.522801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.522834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.522865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.522896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.522932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.522968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.523007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.523041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.523073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.523106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.523138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.523168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.523201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.523234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.923 [2024-10-07 14:19:33.523271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.523304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.923 [2024-10-07 14:19:33.523347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.523379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.523412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.523448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.523479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.523514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.523549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.523580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.523615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.523646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.523681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.523715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.523748] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.523792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.523822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.523857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.523888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.523920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.523955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.523988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.524024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.524055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.524092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.524123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.524152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.524182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.524212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.524246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.524277] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.524306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.524335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.524366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.524399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.524434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.524466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.524495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.524526] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.524556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.524594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.524627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.524657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 
14:19:33.524688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.524719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.524752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.525283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.525319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.525353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.525385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.525415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.525470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.525501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.525533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.525574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.525606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.525644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.525676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.525708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.525740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.525774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.525809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.525842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.525875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.525909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.525940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.525973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 
[2024-10-07 14:19:33.526189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526634] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.526996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.527033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.527065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.527099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.924 [2024-10-07 14:19:33.527132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.527162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.527194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.527226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.527259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.924 [2024-10-07 14:19:33.527289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.925 [2024-10-07 14:19:33.527324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.925 [2024-10-07 14:19:33.527359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.925 [2024-10-07 14:19:33.527507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.925 [2024-10-07 14:19:33.527542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.925 [2024-10-07 14:19:33.527576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.925 [2024-10-07 14:19:33.527612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.925 [2024-10-07 14:19:33.527644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.925 [2024-10-07 14:19:33.527677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.925 [2024-10-07 14:19:33.527707] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.925 [above *ERROR* line repeated verbatim from 14:19:33.527 through 14:19:33.538; duplicate log entries omitted]
> SGL length 1 00:09:09.927 [2024-10-07 14:19:33.538578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.927 [2024-10-07 14:19:33.538613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.927 [2024-10-07 14:19:33.538646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.927 [2024-10-07 14:19:33.538680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.538716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.538746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.538776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.538806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.538838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.538870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.538906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.538937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.538968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539003] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539033] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539832] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.539976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 
14:19:33.540216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:09.928 [2024-10-07 14:19:33.540580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:09.928 [2024-10-07 14:19:33.540936] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.540972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 
14:19:33.541559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541589] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541967] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.541997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.542036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.542069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.542102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.542137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.542170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.542198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.542226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.542255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.542636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.928 [2024-10-07 14:19:33.542667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.542695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.542722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.542751] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.542787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.542815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.542843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.542872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.542900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 
[2024-10-07 14:19:33.542928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.542956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.542984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543353] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543689] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543887] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.543972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544256] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544340] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.544993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.545032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.545067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.545103] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.545139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.545176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.545208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.545237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.545272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 
14:19:33.545306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.545341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.545371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.545409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.545446] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.545479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.545508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.545538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.545569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.545609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.545638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.545670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.545698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.545730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.545760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.929 [2024-10-07 14:19:33.545792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.932 [2024-10-07 14:19:33.557653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.932 [2024-10-07 14:19:33.557686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.932 [2024-10-07 14:19:33.557722] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.932 [2024-10-07 14:19:33.557753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.932 [2024-10-07 14:19:33.557786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.932 [2024-10-07 14:19:33.557817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.932 [2024-10-07 14:19:33.557849] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.932 [2024-10-07 14:19:33.557884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.557916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.557949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.557984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.558028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.558056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.558084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.558113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 
[2024-10-07 14:19:33.558141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.558169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.558197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.558225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.558253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.558281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.558308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.558336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.558364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.558393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.558421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.558449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.558478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.558507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.558821] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.558855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.558890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.558923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.558955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.558988] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559814] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.559976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 
14:19:33.560766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560829] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.560933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:09.933 [2024-10-07 14:19:33.561297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.561332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.561368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.561400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.561433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.561466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.561501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.561533] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:09:09.933 [2024-10-07 14:19:33.561567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.561599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.561631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.561662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.561692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.561734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.561775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.561810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.561843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.561876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.561908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.561943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.561976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.562013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.562047] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.562083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.562115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.933 [2024-10-07 14:19:33.562154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562191] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.562981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.563016] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.563047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.563086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.563115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.563143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.563175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.563213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.563248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.563279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.563310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.563344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.563379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.563763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.563798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.563833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.563860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.563888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.563916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.563945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.563974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.564008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.564038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.564066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.564094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.564123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.564152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.564179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.564208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.564236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 
14:19:33.564265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.564296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.564324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.564353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.564380] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.564408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.564436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.564464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.564492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.564520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.564549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.564577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.564605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.564634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [2024-10-07 14:19:33.564662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.934 [identical *ERROR* lines from ctrlr_bdev.c:361:nvmf_bdev_ctrlr_read_cmd repeated through 14:19:33.576420 omitted] 00:09:09.938 [2024-10-07 14:19:33.576469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.938 [2024-10-07 14:19:33.576500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.938 [2024-10-07 14:19:33.576534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.938 [2024-10-07 14:19:33.576567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.938 [2024-10-07 14:19:33.576601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.938 [2024-10-07 14:19:33.576633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.938 [2024-10-07 14:19:33.576664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.938 [2024-10-07 14:19:33.576695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:09.938 [2024-10-07 14:19:33.576729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.224 [2024-10-07 14:19:33.576759] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.576793] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.576825] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.576867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.576898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.576934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 
[2024-10-07 14:19:33.576964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.576995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577163] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577531] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577947] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.577978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.578015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.225 [2024-10-07 14:19:33.578053] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.578084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.578113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.578146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.578606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.578642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.578675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.578708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.578745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.578780] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.578813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.578845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.578875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.578914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.578946] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.578978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579016] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579084] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579320] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579352] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579619] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579712] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.579867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 
14:19:33.579901] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.580138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.580175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.580206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.580240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.580273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.580305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.580339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.580370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.580404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.580437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.580467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.580505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.580538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.580570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.580607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.580640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.580677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.580708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.580739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.580779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.580808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.580841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.580902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.225 [2024-10-07 14:19:33.580935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.580966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 
[2024-10-07 14:19:33.581122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581589] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581720] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.581992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.582028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.582062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.226 [2024-10-07 14:19:33.582092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.582128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.582161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.582193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.582227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.582261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.582292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.582578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.582615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.582649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.582681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.582714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.582745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.582782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.582814] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.582847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.582881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.582912] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.582949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.582981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.583020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.583057] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.583089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.583119] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.583152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.583182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.583212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.583244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.583278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.583314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.583348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.583382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.583410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.583442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.583474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.583506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.583542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.583573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.583604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.583637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.583668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.583702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 14:19:33.583739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.226 [2024-10-07 
14:19:33.583770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.229 [2024-10-07
14:19:33.595717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.229 [2024-10-07 14:19:33.595750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.229 [2024-10-07 14:19:33.595781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.229 [2024-10-07 14:19:33.595814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.229 [2024-10-07 14:19:33.595847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.229 [2024-10-07 14:19:33.595876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.229 [2024-10-07 14:19:33.595910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.229 [2024-10-07 14:19:33.595942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.229 [2024-10-07 14:19:33.595975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.229 [2024-10-07 14:19:33.596012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.229 [2024-10-07 14:19:33.596043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.229 [2024-10-07 14:19:33.596078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.229 [2024-10-07 14:19:33.596116] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.229 [2024-10-07 14:19:33.596150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 
[2024-10-07 14:19:33.596667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596730] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.596998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597036] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597219] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597254] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597388] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.597975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598052] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598082] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598244] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598378] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.598971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.599007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.599039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.599070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.599102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.599132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.599164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 
14:19:33.599195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.599228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.599261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.599295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.599664] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.599699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.599756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.599787] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.599818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.599859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.599890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.599922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.599955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.599985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
00:09:10.230 [2024-10-07 14:19:33.600025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.230 [2024-10-07 14:19:33.600060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600121] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600386] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600481] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600716] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.600992] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.601028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.601080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.601112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.601143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.601193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.601227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.601258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.601294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.601327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.601361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.601393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.601425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.601456] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.601489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.601521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.601559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.601591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.601624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.601660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.601692] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.601723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.601760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.601791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.602149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.602180] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.602213] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.602246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.602283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.602313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.602351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.602383] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.602415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.602447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.602480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.602514] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.602549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.602580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.602620] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.602653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.602688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 14:19:33.602718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [2024-10-07 
14:19:33.602750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.231 [identical *ERROR* line repeated through 14:19:33.614509 (elapsed 00:09:10.234); duplicate log entries omitted]
14:19:33.614543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.614576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.614607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.614639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.614671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.614706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.614738] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.614771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.614803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.614836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.614866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.614898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.614933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.614968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.615009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.615040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.615071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.615105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.615135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.615167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.615201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.615237] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.615267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.615301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.615332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.615368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.615400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.615439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.615471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 
[2024-10-07 14:19:33.615503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.615535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.234 [2024-10-07 14:19:33.615566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.615601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.615634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.615667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.615700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.615734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.615766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.615801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.615833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.615864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.615898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.615928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.615957] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.616007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.616038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.616069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.616104] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.616135] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.616165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.616197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.616229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.616264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.616299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.616334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.616375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.616407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.616441] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.235 [2024-10-07 14:19:33.616470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.616501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617555] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617719] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617823] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.617971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 
14:19:33.618537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.618994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.619033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.619063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.619096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.619127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.619159] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.619189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.619221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.619377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.619412] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.619443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.619478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.619510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.619543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.619578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 
[2024-10-07 14:19:33.619611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.619645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.619682] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.619715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.619748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.235 [2024-10-07 14:19:33.619781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.619812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.619844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.619876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.619916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.619949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.619980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620017] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620073] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620110] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620175] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620435] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.620997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.621031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.621064] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.621099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.621131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.621164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.621205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.621235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.621267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.621296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.621328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.621361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.621402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.621434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.621465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.621500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.622114] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.236 [2024-10-07 14:19:33.622153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.633439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.633472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.633503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.633535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.633567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.633600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.633639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.633674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.633705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.633755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.633788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.633822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.633861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.634241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 
14:19:33.634280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.634318] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.634349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.634385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.634417] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.634449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.634483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.634513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.634545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.634576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.634607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.634642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.634673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.634706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.634745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.634779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.634806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.634841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.634872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.634907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.634940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.634971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.635010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.635042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.635079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.635110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.635141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.635172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.635203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 
[2024-10-07 14:19:33.635234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.635264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.635296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.635332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.239 [2024-10-07 14:19:33.635364] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.635396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.635432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.635463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.635512] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.635545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.635576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.635630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.635662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.635693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.635731] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.635761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.635796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.635828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.635861] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.635894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.635927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.635959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.635989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.636029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.636063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.636095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.636127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.636162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.636195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.240 [2024-10-07 14:19:33.636226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.636258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.636288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.636316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.636356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.636725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.636760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.636795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.636831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.636864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.636895] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.636930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.636961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.636990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637027] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637190] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:10.240 [2024-10-07 14:19:33.637377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 
14:19:33.637468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637555] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.637968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.638005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.638044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.638074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.638109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.638142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.638176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.638211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.638242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.638273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.638311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.638344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 
[2024-10-07 14:19:33.638377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.638410] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.638442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.638482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.638513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.638548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.638579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.638611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.638642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.638675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.638707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.639091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.639124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.639153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.639183] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.639220] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.639275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.639306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.639337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.639371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.639404] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.240 [2024-10-07 14:19:33.639436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.639475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.639506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.639540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.639572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.639604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.639636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.639668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.241 [2024-10-07 14:19:33.639700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.639734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.639768] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.639801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.639852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.639885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.639918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.639970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.640009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.640042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.640078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.640113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.640145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.640188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.640219] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.640250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.640285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.640319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.640360] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.640391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.640421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.640455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.640488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.640525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.640556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.640591] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.640628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.640661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.640695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.241 [2024-10-07 14:19:33.640731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[… same *ERROR* message repeated for every fuzzed read request between 14:19:33.640764 and 14:19:33.652543, identical apart from per-entry timestamps; repeats omitted …]
00:09:10.244 [2024-10-07 14:19:33.652577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.652624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.652657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.652691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.652723] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.652756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.652788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.652830] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.652866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.652899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.652934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.652966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.652998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.653038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.653072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 
14:19:33.653109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.653143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.653178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.653210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.653242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.653270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.653302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.653335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.653367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.653400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.653433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.653466] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.653497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.653527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.653569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.653602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.653638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.654270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.654307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.654341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.654373] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.654409] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.654442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.654474] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.654520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.654556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.654588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.654621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.654655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 
[2024-10-07 14:19:33.654691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.654725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.654758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.654786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.654824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.654858] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.654891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.654923] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.654955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.654996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.655033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.655065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.655099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.655131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.655165] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.655198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.655231] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.655263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.655298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.655332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.655363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.655400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.244 [2024-10-07 14:19:33.655434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.655469] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.655503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.655535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.655565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.655600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.655633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.245 [2024-10-07 14:19:33.655666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.655699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.655733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.655766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.655797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.655828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.655860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.655890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.655925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.655957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.655991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656119] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656543] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656649] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656884] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.656990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 
14:19:33.657200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657335] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657706] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.657997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.658034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.658068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.658100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.658134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 
[2024-10-07 14:19:33.658167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.658202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.658234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.658290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.658325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.658356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.658392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.658426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.658459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.658497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.658528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.658560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.658592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.658625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.659012] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.659046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.659091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.659125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.659158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.659202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.659240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.659274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.659309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.659342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.659374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.659413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.245 [2024-10-07 14:19:33.659445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.246 [2024-10-07 14:19:33.659479] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.246 [2024-10-07 14:19:33.659513] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.246 [2024-10-07 14:19:33.659545] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.246 [2024-10-07 14:19:33.659579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.246 [2024-10-07 14:19:33.659614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.246 [2024-10-07 14:19:33.659645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.246 [2024-10-07 14:19:33.659679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.246 [2024-10-07 14:19:33.659713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.246 [2024-10-07 14:19:33.659745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.246 [2024-10-07 14:19:33.659779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.246 [2024-10-07 14:19:33.659808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.246 [2024-10-07 14:19:33.659840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.246 [2024-10-07 14:19:33.659873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.246 [2024-10-07 14:19:33.659904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.246 [2024-10-07 14:19:33.659935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.246 [2024-10-07 14:19:33.659969] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.246 [2024-10-07 14:19:33.660007] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.246 [2024-10-07 14:19:33.660040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical error entries from 14:19:33.660071 through 14:19:33.671562 elided ...]
00:09:10.249 [2024-10-07 14:19:33.671594] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.249 [2024-10-07 14:19:33.671960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.671998] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672455] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672521] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672648] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.672996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673035] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673275] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 
14:19:33.673395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673683] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673914] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.673943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.674264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.674297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.674327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.674359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.674393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.674425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.674459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.674493] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.674527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 
[2024-10-07 14:19:33.674561] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.674595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.674637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.674673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.674707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.674742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.674776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.674809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.674842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.674874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.674908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.674939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.674972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.675011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.675047] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.675079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.675115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.675147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.675179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.675214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.675251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.675282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.675319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.675358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.675392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.675423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.675455] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.249 [2024-10-07 14:19:33.675488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.675519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.250 [2024-10-07 14:19:33.675551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.675585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.675623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.675656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.675687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.675735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.675766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.675796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.675836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.675868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.675900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.675934] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.675966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.676006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.676040] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.676077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.676106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.676141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.676173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.676204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.676244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.676274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.676304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.676343] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.676375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.676407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:10.250 [2024-10-07 
14:19:33.677087] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677440] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677842] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.677973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678042] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 
[2024-10-07 14:19:33.678074] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678471] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678548] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678645] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678908] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.678975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.679012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.250 [2024-10-07 14:19:33.679043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.679075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.679106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.679310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.250 [2024-10-07 14:19:33.679341] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.251 [2024-10-07 14:19:33.679371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.251 [2024-10-07 14:19:33.679401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.251 [2024-10-07 14:19:33.679431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.251 [2024-10-07 14:19:33.679463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.251 [2024-10-07 14:19:33.679491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.251 [2024-10-07 14:19:33.679520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.251 [2024-10-07 14:19:33.679548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.251 [2024-10-07 14:19:33.679578] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.251 [2024-10-07 14:19:33.679606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.251 [2024-10-07 14:19:33.679635] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:09:10.253 [2024-10-07 14:19:33.691051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.253 [2024-10-07 14:19:33.691085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.253 [2024-10-07 14:19:33.691115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.253 [2024-10-07 14:19:33.691148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.253 [2024-10-07 14:19:33.691181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.253 [2024-10-07 14:19:33.691214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.253 [2024-10-07 14:19:33.691248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.253 [2024-10-07 14:19:33.691282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.253 [2024-10-07 14:19:33.691313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.253 [2024-10-07 14:19:33.691347] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.253 [2024-10-07 14:19:33.691385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.253 [2024-10-07 14:19:33.691420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.253 [2024-10-07 14:19:33.691453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.253 [2024-10-07 14:19:33.691489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.253 [2024-10-07 14:19:33.691523] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.691556] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.691590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.691626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.691660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.691693] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.691748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.691779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.691814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.691847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.691883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.691916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.691948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.691980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692242] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692308] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692375] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692509] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 
14:19:33.692539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.692968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.693005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.693799] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.693840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.693876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.693909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.693942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.693977] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694157] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694189] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 
[2024-10-07 14:19:33.694287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694681] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694750] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694782] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.694973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.695012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.695044] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.695075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.695113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.695148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.695179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.695210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.695243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.254 [2024-10-07 14:19:33.695274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.695307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.695338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.695372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.695401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.695431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.254 [2024-10-07 14:19:33.695460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.695488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.695516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.695550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.695582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.695610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.695638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.695667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.695696] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.695725] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.695781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.695814] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.695845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.695879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.695911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696439] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696678] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696811] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 
14:19:33.696845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.696974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697250] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697313] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697451] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697517] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697548] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 
[2024-10-07 14:19:33.697835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.697993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.698029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.698061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.698091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.698125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.698152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.698185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.698569] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.698609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.698644] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.698676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.698715] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.698747] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.698779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.698813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.698847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.698881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.698918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.698949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.698983] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.699026] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.699060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.699093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.255 [2024-10-07 14:19:33.699132] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.255 [2024-10-07 14:19:33.699165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.256 true 00:09:10.258 [2024-10-07 14:19:33.711289] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 *
block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.711322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.711356] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.711387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.711419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.711453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.711488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.711520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.711557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.711592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.711624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.711669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.711700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.711733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.711766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 
14:19:33.711800] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.711835] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.711868] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.711900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.711932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.711963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.711996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.712032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.712062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.712097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.712125] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.712158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.712192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.712224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.712258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.712291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.712324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.258 [2024-10-07 14:19:33.712354] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.712390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.712445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.712478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.712510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.712541] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.712574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.712623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.712654] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.712686] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.712718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.712749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 
[2024-10-07 14:19:33.712781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.712813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.712845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.712888] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.712922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.712952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.712984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713117] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713187] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713218] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713252] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713286] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713319] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713351] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713601] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.713970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714370] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714498] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714765] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714826] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714857] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714889] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.714978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.715011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.715046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.715079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.715112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.715142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.715174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.715206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.715238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.715270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 
14:19:33.715305] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.715339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.715372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.715403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.715436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.715470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.715502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.715534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.715568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.715604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.715961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.715996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.716038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.716070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.716102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.716141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.716172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.716204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.716244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.716272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.716304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.716336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.716365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.716393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.716421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.716450] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.716478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:10.259 [2024-10-07 14:19:33.716815] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.716847] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.716876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.716904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.716938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.716975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.717014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.717046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.717081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.717110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.259 [2024-10-07 14:19:33.717138] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717764] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717833] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717864] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717929] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.717996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.718038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.718072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.718105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.718148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.718181] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.718212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.718244] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.260 [2024-10-07 14:19:33.718279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:10.262 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.262 [2024-10-07 14:19:33.729221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:09:10.262 [2024-10-07 14:19:33.729259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.729295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.729327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.729358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.729387] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.729416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.729445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.729478] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.729510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.729542] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.729574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.729608] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.729642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.729674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.729706] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.729741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.729771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.729802] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.729834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.729867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.729896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.729925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.729954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.729981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.730015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.730043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.262 [2024-10-07 14:19:33.730071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.730099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.730126] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.730154] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.730185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.730215] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.730249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.730615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.730650] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.730685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.730721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.730754] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.730789] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.730822] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.730854] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.730891] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.730924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 
14:19:33.730958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.730994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731197] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731260] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731291] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731357] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731758] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731788] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731818] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 
[2024-10-07 14:19:33.731925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.731991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.732028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.732065] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.732097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.732129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.732170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.732200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.732234] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.732269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.732299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.732334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.732361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.732394] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.732426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.732458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.732487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.732516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.732546] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.732581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.732615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.732647] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.732687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733168] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733334] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733367] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733402] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733535] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733603] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733705] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733794] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.733995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.734031] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.734064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.734096] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.734127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.734162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.734195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.734228] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.734268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.734300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.734331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.734362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.734399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.734433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.734465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.734503] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.734532] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.734563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.734595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.734625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.734668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 
14:19:33.734702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.734733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.734763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.263 [2024-10-07 14:19:33.734792] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.734824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.734856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.734886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.734920] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.734954] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.734985] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.735025] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.735059] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.735089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.735123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.735156] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.735186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.735770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.735805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.735840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.735875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.735915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.735949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.735984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736196] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 
[2024-10-07 14:19:33.736235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736468] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736499] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736560] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736625] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736657] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736690] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736724] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736755] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736790] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736821] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736852] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.736990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.737027] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.737060] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.737100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.737141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.264 [2024-10-07 14:19:33.737172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.264 [2024-10-07 14:19:33.737204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749217] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749282] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749314] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749420] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749449] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749553] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749622] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749654] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749753] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749817] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749878] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.749980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.750013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.750047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.750078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.750113] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.750145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.750177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.750208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.750240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.750276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.750306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.750338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.750750] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.750786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.750819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.750851] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.750881] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.267 [2024-10-07 14:19:33.750911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.750940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 
14:19:33.750974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751012] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751112] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751209] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751714] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 
[2024-10-07 14:19:33.751939] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.751970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752045] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752150] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752185] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752328] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752387] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752472] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752500] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752699] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.752731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753148] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753248] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753279] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753414] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753447] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753480] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753552] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753647] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753764] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.753991] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.754029] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.754061] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.754093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.754130] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.754161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.754193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.754223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.754253] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.754292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.754322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.754355] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.754385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.754415] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.754448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.754481] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.754516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.754551] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.754583] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 
14:19:33.754616] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.268 [2024-10-07 14:19:33.754656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.754688] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.754718] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.754749] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.754785] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.755164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.755205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.755239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.755270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.755304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.755337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.755370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.755405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.755438] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.755473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.755507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.755538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.755571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:10.269 [2024-10-07 14:19:33.755813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.755848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.755880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.755915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.755950] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.755981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.756018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.756050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.756089] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.756121] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.756153] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.756193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.756226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.756259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.756297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.756329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.756359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.756393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.756426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.756461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.756494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.756527] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.756566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.269 [2024-10-07 14:19:33.756598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.269 [2024-10-07 14:19:33.756628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* lines omitted: the message "ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" repeats continuously from 14:19:33.756628 through 14:19:33.768238]
00:09:10.272 [2024-10-07 14:19:33.768238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512
> SGL length 1 00:09:10.272 [2024-10-07 14:19:33.768273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.768304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.768348] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.768379] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.768411] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.768443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.768475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.768510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.768540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.768573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.768607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.768640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.768676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.768710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.768741] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.768776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.768806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.768837] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.768874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.768906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.768941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.768980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769054] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769086] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769122] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769155] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769323] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769424] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769586] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769617] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769691] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 
14:19:33.769721] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769752] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769786] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769820] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769853] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769885] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769952] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.769984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.770028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.770063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.770637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.770673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.770704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.770740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.272 [2024-10-07 14:19:33.770771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.770803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.770838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.770872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.770904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.770940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.770972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771063] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771224] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 
[2024-10-07 14:19:33.771258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771459] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771492] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771694] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771729] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771760] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771798] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.771960] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772395] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772495] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772628] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772661] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772694] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772734] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772958] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.772994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773033] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773068] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773099] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773192] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773288] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773418] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773484] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773516] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773612] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 
14:19:33.773810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773906] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.773972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.774008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.774043] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.774072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.774106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.774143] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.774171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.774199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.774227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.774255] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.774283] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.774311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.774339] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.273 [2024-10-07 14:19:33.774368] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 [2024-10-07 14:19:33.774396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 [2024-10-07 14:19:33.774425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 [2024-10-07 14:19:33.774453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 [2024-10-07 14:19:33.774482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 [2024-10-07 14:19:33.774510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 [2024-10-07 14:19:33.774540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 [2024-10-07 14:19:33.774567] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 [2024-10-07 14:19:33.774595] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 [2024-10-07 14:19:33.774623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 [2024-10-07 14:19:33.774651] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 
[2024-10-07 14:19:33.774679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 [2024-10-07 14:19:33.774707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 [2024-10-07 14:19:33.774735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 [2024-10-07 14:19:33.774763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 [2024-10-07 14:19:33.774791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 [2024-10-07 14:19:33.774819] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 [2024-10-07 14:19:33.774848] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 [2024-10-07 14:19:33.774876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 [2024-10-07 14:19:33.775184] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 [2024-10-07 14:19:33.775216] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 [2024-10-07 14:19:33.775245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 [2024-10-07 14:19:33.775276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 [2024-10-07 14:19:33.775306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 [2024-10-07 14:19:33.775342] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.274 [2024-10-07 14:19:33.775375] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.787109] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.787141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.787173] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.787206] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.787235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.787268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.787298] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.787331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.787486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.787519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.787566] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.787596] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.787631] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.787663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.787695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.277 [2024-10-07 14:19:33.787731] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.787763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.787795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.787831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.787865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.787897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.787935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.787971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788100] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788167] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788199] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788259] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788287] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788322] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788353] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788385] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788422] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788454] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788487] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788549] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788582] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788613] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788680] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788713] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.788971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.789010] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.789039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.789067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.789095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 
14:19:33.789123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.789151] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.789179] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.789208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.789236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.789265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.789293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.789321] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.789349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.789377] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.789405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.789433] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.789462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.789491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.789795] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.277 [2024-10-07 14:19:33.789827] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.789873] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.789905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.789938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.789970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790006] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790040] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790078] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790110] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790176] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790238] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790273] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 
[2024-10-07 14:19:33.790307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790444] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790572] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790773] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790839] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790870] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.790904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.791226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.791262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.791296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.791327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.791359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.791393] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.791426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.791456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.791489] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.791525] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.278 [2024-10-07 14:19:33.791564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.791593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.791624] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.791655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.791687] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.791726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.791769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.791801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.791831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.791863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.791903] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.791937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.791968] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.791997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792035] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792198] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792270] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792369] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792434] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792467] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792570] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792808] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792840] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792911] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.792975] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 
14:19:33.793014] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.793047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.793077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.793108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.793141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.793174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.793204] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.793236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.793266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.793297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.793331] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.793473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.793510] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.793544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.793576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.793609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.793644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.793677] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.278 [2024-10-07 14:19:33.793708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.279 [2024-10-07 14:19:33.793741] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.279 [2024-10-07 14:19:33.793775] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.279 [2024-10-07 14:19:33.793805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.279 [2024-10-07 14:19:33.793863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.279 [2024-10-07 14:19:33.793896] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.279 [2024-10-07 14:19:33.793928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.279 [2024-10-07 14:19:33.793963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.279 [2024-10-07 14:19:33.793996] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.279 [2024-10-07 14:19:33.794034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.279 [2024-10-07 14:19:33.794071] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.279 
[2024-10-07 14:19:33.794101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.279 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:10.279 [2024-10-07 14:19:33.806285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd:
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.806345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.806376] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.806408] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.806443] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.806476] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.806511] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.806544] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.806577] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.806610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.806643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.806675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.806711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.806742] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.806773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 
[2024-10-07 14:19:33.806866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.806905] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.806937] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.806970] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807011] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807048] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807226] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807265] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807295] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807361] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807392] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807423] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807486] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807588] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807653] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807685] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807717] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807746] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807806] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807918] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.807974] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.808009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.808039] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.808067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.808095] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.808123] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.808152] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.808183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.808211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.808239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.808268] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.808296] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.808326] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809134] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809210] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809241] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809271] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809304] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809336] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809374] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809406] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809505] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809574] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809744] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809777] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809809] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809859] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809892] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.809989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 
14:19:33.810024] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.810066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.810101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.810133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.282 [2024-10-07 14:19:33.810169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.810201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.810235] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.810281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.810312] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.810345] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.810382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.810413] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.810445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.810485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.810598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.810644] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.810675] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.810707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.810743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.810774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.810805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.810838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.810879] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.810909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.810941] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.810971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811008] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 
[2024-10-07 14:19:33.811105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811171] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811306] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811569] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811600] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811634] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811672] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811704] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811737] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811771] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811872] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811904] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811938] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.811971] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.812009] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.812049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.283 [2024-10-07 14:19:33.812085] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.812118] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.812149] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.812183] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.812214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.812249] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.812280] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.812315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.812372] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.812407] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.812437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.812473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.812502] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.812534] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.812573] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.812604] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.812635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.812674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.283 [2024-10-07 14:19:33.812705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.284 [2024-10-07 14:19:33.812733] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.284 [2024-10-07 14:19:33.812866] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.284 [2024-10-07 14:19:33.812898] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.284 [2024-10-07 14:19:33.812933] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.284 [2024-10-07 14:19:33.812966] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.284 [2024-10-07 14:19:33.812999] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.284 [2024-10-07 14:19:33.813046] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.284 [2024-10-07 14:19:33.813079] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.284 [2024-10-07 14:19:33.813111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.284 [2024-10-07 14:19:33.813139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.284 [2024-10-07 14:19:33.813172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.284 [2024-10-07 14:19:33.813203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.284 [2024-10-07 14:19:33.813232] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.284 [2024-10-07 14:19:33.813264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.284 [2024-10-07 14:19:33.813302] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.284 [2024-10-07 14:19:33.813330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.284 [2024-10-07 14:19:33.813363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.284 [2024-10-07 14:19:33.813397] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.284 [2024-10-07 14:19:33.813431] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.284 [2024-10-07 14:19:33.813462] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.284 [2024-10-07 14:19:33.813497] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.284 [2024-10-07 14:19:33.813528] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.284 [2024-10-07 14:19:33.813564] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.284 [2024-10-07 14:19:33.813597] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.284 [2024-10-07 
14:19:33.814047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07
14:19:33.825636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.825668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.825701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.825735] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.825766] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.825797] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.825828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.825860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.825890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.825924] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.825956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.825987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826051] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826142] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826366] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826398] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826464] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826531] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 
[2024-10-07 14:19:33.826563] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826633] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826844] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826909] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826942] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.826976] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.827015] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.827047] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.827076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.827108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.827139] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.827172] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.827205] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.827236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.827267] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.827300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.827333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.827363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.827394] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.828200] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.828243] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.828284] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.287 [2024-10-07 14:19:33.828317] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.828350] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.828390] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.828421] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.828453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.828490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.828524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.828559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.828593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.828627] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.828667] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.828702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.828732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.828763] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.828796] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.828831] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.828862] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.828894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.828926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.828964] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.828994] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.829034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.829067] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.829098] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.829127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.829161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.287 [2024-10-07 14:19:33.829194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.829225] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.829258] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.829290] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.829324] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.829359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.829391] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.829426] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.829458] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.829488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.829518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.829550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.829584] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.829615] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.829646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.829679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.829709] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 
14:19:33.829748] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.829778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.829810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.829860] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.829893] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.829925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.829959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.829993] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.830034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.830070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.830102] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.830133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.830166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.830201] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.830236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.830268] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.830300] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.830452] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.830485] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.830518] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.830573] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.830609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.830640] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.830674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.830707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.830740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.830781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.830816] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.830847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 
[2024-10-07 14:19:33.830883] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.830913] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.830945] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.830978] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831092] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831127] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831158] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831208] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831240] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831272] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831307] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831338] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831369] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831401] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831507] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831539] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831635] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831669] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831772] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831803] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831836] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831930] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.831963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.832005] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.832038] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.832072] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.832111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.832144] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.832178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.832212] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.832245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.832285] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.832316] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.832349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.832388] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.832419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.832456] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.832488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.288 [2024-10-07 14:19:33.832519] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.289 [2024-10-07 14:19:33.832554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.289 [2024-10-07 14:19:33.832587] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.289 [2024-10-07 14:19:33.832618] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.289 [2024-10-07 14:19:33.832656] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.289 [2024-10-07 14:19:33.833425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.289 [2024-10-07 14:19:33.833463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.289 [2024-10-07 14:19:33.833496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.289 [2024-10-07 14:19:33.833529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.289 [2024-10-07 14:19:33.833562] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.289 [2024-10-07 14:19:33.833598] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.289 [2024-10-07 14:19:33.833632] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.289 Message suppressed 999 times: [2024-10-07 14:19:33.833703] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.289 Read completed with error (sct=0, sc=15) 00:09:10.289 [2024-10-07
14:19:33.845246] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.845281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.845311] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.845349] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.845382] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.845416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.845448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.845482] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.845522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.845663] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.845701] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.845732] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.845769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.845804] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.845843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.845876] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.845907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.845940] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.845973] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846081] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846147] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846182] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846262] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 
[2024-10-07 14:19:33.846371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846437] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846470] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846571] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846605] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846639] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846671] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846705] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846807] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846839] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846875] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846910] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846943] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.846972] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847013] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847083] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847115] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847146] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847177] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847214] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847247] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847278] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847315] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847344] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847416] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847448] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847483] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847515] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847547] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847580] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847610] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847646] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847745] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847781] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.847812] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.848186] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.848223] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.848257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.848299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.848332] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.848363] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.848392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.848425] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.848460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.848491] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.848522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.848558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.848590] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.848621] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.848660] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.292 [2024-10-07 14:19:33.848695] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.848726] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.848761] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.848791] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.848824] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.848855] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.848886] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.848922] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.848953] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.848987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849020] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849049] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 
14:19:33.849111] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849145] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849178] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849211] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849245] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849276] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849340] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849371] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849403] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849436] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849465] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849522] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849550] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849579] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849636] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849702] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849774] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849838] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849871] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849956] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.849984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 
[2024-10-07 14:19:33.850018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.850047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.850077] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.850106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.850136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.850166] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.850523] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.850558] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.850592] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.850626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.850659] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.850707] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.850739] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.850770] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.850820] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.850856] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.850890] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.850926] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.850961] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.850995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851034] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851066] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851165] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851202] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851233] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851266] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851299] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851329] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851400] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851429] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851460] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851494] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851559] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851593] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851626] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851662] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851696] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851728] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851762] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851795] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851828] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851863] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851894] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851925] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851963] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.851995] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.852032] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.852070] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.852101] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.852137] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.852169] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.852199] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.852230] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.852269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.293 [2024-10-07 14:19:33.852303] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.296 [2024-10-07 14:19:33.864463] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 *
block size 512 > SGL length 1 00:09:10.296 [2024-10-07 14:19:33.864496] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.864529] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.864568] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.864602] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.864637] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.864668] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.864700] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.864740] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.864773] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.864805] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.864843] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.864874] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.864907] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.864946] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 
14:19:33.864980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865019] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865050] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865080] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865108] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865136] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865164] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865193] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865222] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865251] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865281] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865310] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865575] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865607] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865638] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865673] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865710] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865776] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865813] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865847] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865880] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865948] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.865980] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866023] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866055] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 
[2024-10-07 14:19:33.866088] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866128] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866229] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866264] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866297] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866330] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866362] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866396] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866461] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866504] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866537] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866574] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866609] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866674] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866736] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866769] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866900] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866932] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866965] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.866997] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.867037] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.867069] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.297 [2024-10-07 14:19:33.867106] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.867140] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.867170] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.867203] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.867236] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.867269] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.867301] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.867333] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.867384] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.867419] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.867453] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.867488] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.867520] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.867554] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.867592] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.867623] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.867655] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.867684] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.868021] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.868058] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.868091] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.868124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.868160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.868195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.868227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.868263] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.868294] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.868327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.868365] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.868399] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.868432] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.868477] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.297 [2024-10-07 14:19:33.868508] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.868540] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.868581] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.868611] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.868643] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.868676] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.868708] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.868743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.868779] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.868812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.868845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 
14:19:33.868882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.868915] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.868949] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.868981] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869022] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869056] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869093] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869124] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869161] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869194] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869257] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869292] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869325] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869358] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869392] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869428] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869457] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869490] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869524] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869565] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869599] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869630] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869666] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869698] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869729] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869767] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869801] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 
[2024-10-07 14:19:33.869865] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869897] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869928] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869959] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.869989] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.870028] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.870062] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.870094] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.870129] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.870796] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.870834] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.870867] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.870902] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.870935] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.870968] ctrlr_bdev.c: 
361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871007] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871041] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871075] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871109] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871141] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871174] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871207] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871239] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871274] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871309] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871337] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871370] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871405] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871442] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871475] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871506] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871538] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871576] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871606] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871641] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871679] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871711] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871743] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871778] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871810] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871845] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871882] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871916] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871950] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.298 [2024-10-07 14:19:33.871987] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.299 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
> SGL length 1 00:09:10.301 [2024-10-07 14:19:33.883957] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.301 [2024-10-07 14:19:33.883990] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.301 [2024-10-07 14:19:33.884030] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.301 [2024-10-07 14:19:33.884064] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.301 [2024-10-07 14:19:33.884097] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.301 [2024-10-07 14:19:33.884131] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.301 [2024-10-07 14:19:33.884162] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.301 [2024-10-07 14:19:33.884195] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.301 [2024-10-07 14:19:33.884227] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.301 [2024-10-07 14:19:33.884261] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.301 [2024-10-07 14:19:33.884293] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.301 [2024-10-07 14:19:33.884327] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.301 [2024-10-07 14:19:33.884359] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.301 [2024-10-07 14:19:33.884389] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.301 [2024-10-07 14:19:33.884417] 
ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.884445] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.884473] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.884501] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.884530] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.884557] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.884585] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.884614] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.884642] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.884670] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.884697] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.884727] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.884756] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.884784] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.884812] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.884841] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.884869] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.884899] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.884927] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.884955] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.884984] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.885018] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.885047] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.885076] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.885105] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.885133] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.885160] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.885188] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 14:19:33.885221] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 [2024-10-07 
14:19:33.885254] ctrlr_bdev.c: 361:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:10.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.564 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.564 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:10.564 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:10.564 true 00:09:10.824 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:10.824 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.824 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.085 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:11.085 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:11.346 true 00:09:11.346 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:11.346 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.346 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.652 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:11.652 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:11.914 true 00:09:11.914 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:11.914 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.914 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.176 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:12.176 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:12.176 true 00:09:12.461 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:12.461 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:09:13.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.475 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.736 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:13.736 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:13.736 true 00:09:13.736 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:13.736 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.679 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.679 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.939 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:14.939 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:14.939 true 00:09:14.939 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:14.939 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.200 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.460 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:15.460 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:15.460 true 00:09:15.460 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:15.460 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.721 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.982 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:15.982 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 
00:09:15.982 true 00:09:15.982 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:15.982 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.242 14:19:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:16.503 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:16.503 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:16.503 true 00:09:16.763 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:16.763 14:19:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.706 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:17.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.706 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.967 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:09:17.967 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:17.967 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:18.228 true 00:09:18.228 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:18.228 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.173 14:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.173 14:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:19.173 14:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:19.433 true 00:09:19.433 14:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:19.433 14:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.433 14:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.694 14:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:19.694 14:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:19.956 true 00:09:19.956 14:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:19.956 14:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.956 14:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.218 14:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:20.218 14:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:20.480 true 00:09:20.480 14:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:20.480 14:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.423 14:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:21.423 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:21.423 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:21.684 true 00:09:21.684 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:21.684 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.684 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:21.946 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:21.946 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:22.207 true 00:09:22.207 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:22.207 14:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.207 
14:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:22.468 14:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:22.468 14:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:22.729 true 00:09:22.729 14:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:22.729 14:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.729 14:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:22.989 14:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:22.989 14:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:23.250 true 00:09:23.250 14:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:23.250 14:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.194 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.455 14:19:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:24.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.455 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:24.455 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:24.716 true 00:09:24.716 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:24.716 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:25.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.660 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:25.660 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:25.660 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:25.921 true 00:09:25.921 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:25.921 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.181 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.181 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:26.181 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:26.442 true 00:09:26.442 14:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:26.442 14:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.827 14:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.827 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:09:27.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.827 14:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:27.827 14:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:27.827 true 00:09:28.088 14:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:28.088 14:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.921 14:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.921 14:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:28.921 14:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:29.182 true 00:09:29.182 14:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:29.182 14:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.443 14:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.443 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:09:29.443 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:09:29.704 true 00:09:29.704 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:29.704 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.090 14:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.091 14:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:09:31.091 14:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:09:31.351 true 00:09:31.351 14:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:31.351 14:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.291 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.291 14:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.291 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.291 14:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:09:32.291 14:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:09:32.552 true 00:09:32.552 14:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:32.552 14:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.552 14:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.813 14:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:09:32.813 14:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:09:33.074 true 00:09:33.074 14:19:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:33.074 14:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.074 14:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.335 14:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:09:33.335 14:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:09:33.595 true 00:09:33.595 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:33.595 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.857 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.857 Initializing NVMe Controllers 00:09:33.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:33.857 Controller IO queue size 128, less than required. 00:09:33.857 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:33.857 Controller IO queue size 128, less than required. 
00:09:33.857 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:33.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:33.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:33.857 Initialization complete. Launching workers. 00:09:33.857 ======================================================== 00:09:33.857 Latency(us) 00:09:33.857 Device Information : IOPS MiB/s Average min max 00:09:33.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2872.20 1.40 25989.30 1629.99 1104963.35 00:09:33.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 15168.25 7.41 8438.21 1632.39 523182.52 00:09:33.857 ======================================================== 00:09:33.857 Total : 18040.44 8.81 11232.50 1629.99 1104963.35 00:09:33.857 00:09:33.857 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:09:33.857 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:09:34.119 true 00:09:34.119 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2799275 00:09:34.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2799275) - No such process 00:09:34.119 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2799275 00:09:34.119 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.380 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:34.380 14:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:09:34.380 14:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:09:34.380 14:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:09:34.380 14:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:34.380 14:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:34.641 null0 00:09:34.641 14:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:34.641 14:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:34.641 14:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:34.901 null1 00:09:34.901 14:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:34.901 14:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:34.901 14:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:34.901 null2 00:09:34.901 14:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:34.901 14:19:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:34.901 14:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:35.161 null3 00:09:35.161 14:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:35.161 14:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:35.161 14:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:35.422 null4 00:09:35.422 14:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:35.422 14:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:35.422 14:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:35.422 null5 00:09:35.422 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:35.422 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:35.422 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:35.683 null6 00:09:35.683 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:35.683 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:09:35.683 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:35.944 null7 00:09:35.944 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:35.944 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:35.944 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:35.944 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:35.944 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:35.944 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:35.944 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:35.944 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:35.944 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:35.944 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:35.944 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.944 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:35.944 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:35.945 
14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2805831 2805832 2805834 2805836 2805838 2805840 2805843 2805845 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:35.945 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:36.207 14:19:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.207 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:36.469 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.469 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.469 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:36.469 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:36.469 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:36.469 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:09:36.469 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:36.469 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:36.469 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:36.469 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:36.469 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.731 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:36.992 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:36.992 14:20:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:36.992 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:36.992 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:36.992 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:36.992 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.992 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.993 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:36.993 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.993 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.993 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:36.993 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.993 14:20:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.993 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:36.993 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.993 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.993 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:36.993 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.993 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.993 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:36.993 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.993 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.993 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:36.993 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.993 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:09:36.993 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:36.993 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.993 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.993 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:37.253 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:37.253 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.253 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:37.253 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:37.253 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:37.253 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:37.253 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:37.253 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:37.253 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.253 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.253 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:37.253 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.253 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.253 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:37.514 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.514 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.514 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:09:37.514 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.514 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.514 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:37.514 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.514 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.514 14:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:37.514 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.514 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.514 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:37.514 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.514 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.515 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:37.515 14:20:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.515 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.515 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:37.515 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:37.515 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.515 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:37.515 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:37.515 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:37.515 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:37.515 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:37.515 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.776 14:20:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:37.776 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:38.038 14:20:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.038 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:38.300 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.300 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.300 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:38.300 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:38.300 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.301 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:38.301 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:38.301 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:38.301 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:38.301 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:38.301 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:38.301 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.301 14:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.301 14:20:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:38.562 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:38.823 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.085 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:39.085 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:39.085 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:39.085 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:39.085 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:39.085 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.085 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.085 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:39.085 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.085 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.085 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:39.085 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:39.085 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.085 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.085 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:39.085 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.085 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.085 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:39.348 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.348 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.348 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:39.348 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.348 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.348 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:39.348 14:20:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.348 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.348 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.348 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:39.348 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.348 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.348 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:39.348 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:39.348 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:39.348 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:39.348 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:39.348 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.348 14:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.348 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:39.610 rmmod nvme_tcp 00:09:39.610 rmmod nvme_fabrics 00:09:39.610 rmmod nvme_keyring 00:09:39.610 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:39.871 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@128 -- # set -e 00:09:39.871 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:09:39.871 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 2798788 ']' 00:09:39.871 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 2798788 00:09:39.871 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2798788 ']' 00:09:39.871 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2798788 00:09:39.871 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:09:39.871 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:39.871 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2798788 00:09:39.871 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:39.871 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:39.871 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2798788' 00:09:39.871 killing process with pid 2798788 00:09:39.871 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2798788 00:09:39.871 14:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2798788 00:09:40.442 14:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:40.442 14:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:40.442 14:20:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:40.442 14:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:09:40.442 14:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:09:40.442 14:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:40.442 14:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:09:40.442 14:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:40.442 14:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:40.442 14:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.442 14:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.442 14:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:42.989 00:09:42.989 real 0m49.278s 00:09:42.989 user 3m11.578s 00:09:42.989 sys 0m15.902s 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:42.989 ************************************ 00:09:42.989 END TEST nvmf_ns_hotplug_stress 00:09:42.989 ************************************ 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:42.989 ************************************ 00:09:42.989 START TEST nvmf_delete_subsystem 00:09:42.989 ************************************ 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:42.989 * Looking for test storage... 00:09:42.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.989 
14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:42.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.989 --rc genhtml_branch_coverage=1 00:09:42.989 --rc genhtml_function_coverage=1 00:09:42.989 --rc genhtml_legend=1 
00:09:42.989 --rc geninfo_all_blocks=1 00:09:42.989 --rc geninfo_unexecuted_blocks=1 00:09:42.989 00:09:42.989 ' 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:42.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.989 --rc genhtml_branch_coverage=1 00:09:42.989 --rc genhtml_function_coverage=1 00:09:42.989 --rc genhtml_legend=1 00:09:42.989 --rc geninfo_all_blocks=1 00:09:42.989 --rc geninfo_unexecuted_blocks=1 00:09:42.989 00:09:42.989 ' 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:42.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.989 --rc genhtml_branch_coverage=1 00:09:42.989 --rc genhtml_function_coverage=1 00:09:42.989 --rc genhtml_legend=1 00:09:42.989 --rc geninfo_all_blocks=1 00:09:42.989 --rc geninfo_unexecuted_blocks=1 00:09:42.989 00:09:42.989 ' 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:42.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.989 --rc genhtml_branch_coverage=1 00:09:42.989 --rc genhtml_function_coverage=1 00:09:42.989 --rc genhtml_legend=1 00:09:42.989 --rc geninfo_all_blocks=1 00:09:42.989 --rc geninfo_unexecuted_blocks=1 00:09:42.989 00:09:42.989 ' 00:09:42.989 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@15 -- # shopt -s extglob 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:42.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:42.990 14:20:06 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:42.990 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:51.136 14:20:13 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:51.136 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:51.136 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:51.136 Found net devices under 0000:31:00.0: cvl_0_0 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:51.136 14:20:13 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:51.136 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:51.137 Found net devices under 0000:31:00.1: cvl_0_1 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:51.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:51.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.554 ms 00:09:51.137 00:09:51.137 --- 10.0.0.2 ping statistics --- 00:09:51.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.137 rtt min/avg/max/mdev = 0.554/0.554/0.554/0.000 ms 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:51.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:51.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:09:51.137 00:09:51.137 --- 10.0.0.1 ping statistics --- 00:09:51.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.137 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=2811236 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 2811236 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2811236 ']' 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:51.137 14:20:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.137 [2024-10-07 14:20:13.950369] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:09:51.137 [2024-10-07 14:20:13.950464] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.137 [2024-10-07 14:20:14.058496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:51.137 [2024-10-07 14:20:14.237376] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:51.137 [2024-10-07 14:20:14.237424] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.137 [2024-10-07 14:20:14.237435] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.137 [2024-10-07 14:20:14.237446] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.137 [2024-10-07 14:20:14.237455] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:51.137 [2024-10-07 14:20:14.238934] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.137 [2024-10-07 14:20:14.238956] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.137 [2024-10-07 14:20:14.783477] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.137 [2024-10-07 14:20:14.808039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.137 NULL1 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.137 14:20:14 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.137 Delay0 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.137 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.398 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.398 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2811374 00:09:51.398 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:51.398 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:51.398 [2024-10-07 14:20:14.935688] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:09:53.311 14:20:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:53.311 14:20:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.311 14:20:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 starting I/O failed: -6 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 starting I/O failed: -6 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 starting I/O failed: -6 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 starting I/O failed: -6 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 starting I/O failed: -6 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 starting I/O failed: -6 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 starting I/O failed: -6 
00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 starting I/O failed: -6 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 starting I/O failed: -6 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 starting I/O failed: -6 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 starting I/O failed: -6 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 [2024-10-07 14:20:17.022223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000027180 is same with the state(6) to be set 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Read 
completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, 
sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 [2024-10-07 14:20:17.022751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026780 is same with the state(6) to be set 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 starting I/O failed: -6 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 starting I/O failed: -6 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 starting I/O failed: -6 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 starting I/O failed: -6 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 starting I/O failed: -6 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 starting I/O failed: -6 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 starting I/O failed: -6 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 
Read completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 starting I/O failed: -6 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 starting I/O failed: -6 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 starting I/O failed: -6 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 [2024-10-07 14:20:17.028248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030000 is same with the state(6) to be set 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.573 Write completed with error (sct=0, sc=8) 00:09:53.573 Read completed with error (sct=0, sc=8) 00:09:53.574 Write completed with error (sct=0, sc=8) 00:09:53.574 Write completed with error (sct=0, sc=8) 00:09:53.574 Write completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Write completed with error (sct=0, sc=8) 00:09:53.574 Write completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Read 
completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Write completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Write completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Write completed with error (sct=0, sc=8) 00:09:53.574 Write completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Write completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Write completed with error (sct=0, sc=8) 00:09:53.574 Write completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Read completed with error (sct=0, sc=8) 00:09:53.574 Write completed with error (sct=0, sc=8) 00:09:54.516 [2024-10-07 14:20:18.002632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025d80 is same with the state(6) to be set 00:09:54.516 Read completed with 
error (sct=0, sc=8)
00:09:54.516 Read/Write completed with error (sct=0, sc=8) [identical completion-status entry repeated for each remaining queued I/O]
00:09:54.516 [2024-10-07 14:20:18.025524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000027680 is same with the state(6) to be set
00:09:54.516 [2024-10-07 14:20:18.026075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026c80 is same with the state(6) to be set
00:09:54.517 [2024-10-07 14:20:18.030067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030f00 is same with the state(6) to be set
00:09:54.517 [2024-10-07 14:20:18.032348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030500 is same with the state(6) to be set
00:09:54.517 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:54.517 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:09:54.517 Initializing NVMe Controllers
00:09:54.517 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:54.517 Controller IO queue size 128, less than required.
00:09:54.517 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:54.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:54.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:54.517 Initialization complete. Launching workers. 
00:09:54.517 ========================================================
00:09:54.517 Latency(us)
00:09:54.517 Device Information : IOPS MiB/s Average min max
00:09:54.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.91 0.08 908778.51 553.60 1005945.04
00:09:54.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.42 0.08 912834.66 436.04 1010544.56
00:09:54.517 ========================================================
00:09:54.517 Total : 326.33 0.16 910797.30 436.04 1010544.56
00:09:54.517
00:09:54.517 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2811374
00:09:54.517 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:09:54.517 [2024-10-07 14:20:18.033413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000025d80 (9): Bad file descriptor
00:09:54.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2811374
00:09:55.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2811374) - No such process
00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2811374
00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2811374
00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:09:55.100 14:20:18 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2811374 00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:55.100 [2024-10-07 
14:20:18.561118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2812218 00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2812218 00:09:55.100 14:20:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:55.100 [2024-10-07 14:20:18.669449] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
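The trace around `target/delete_subsystem.sh@57` and `@58` shows the script waiting for the `spdk_nvme_perf` child (pid 2812218) by probing it with `kill -0` and sleeping 0.5 s between probes, bounded by a `(( delay++ > 20 ))` counter. A minimal self-contained sketch of that polling pattern follows; the `sleep 1 &` stand-in for the perf process and the diagnostic messages are assumptions for illustration, not the script's actual code:

```shell
#!/usr/bin/env bash
# Sketch of the kill -0 / sleep polling loop visible in the trace above.
# `kill -0 PID` delivers no signal; its exit status only reports whether
# PID still exists and can be signaled.
sleep 1 &                 # stand-in for the spdk_nvme_perf child
perf_pid=$!
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    # Bounded wait, mirroring the script's (( delay++ > 20 )) guard:
    # give up after roughly 10 s of 0.5 s probes.
    if (( delay++ > 20 )); then
        echo "timed out waiting for $perf_pid" >&2
        exit 1
    fi
    sleep 0.5
done
echo "perf process $perf_pid exited"
```

Once the shell has reaped the exited child, `kill -0` fails with ESRCH, which is what produces the `kill: (2811374) - No such process` line earlier in this log.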
00:09:55.671 14:20:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:55.671 14:20:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2812218 00:09:55.671 14:20:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:55.932 14:20:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:55.932 14:20:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2812218 00:09:55.932 14:20:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:56.504 14:20:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:56.504 14:20:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2812218 00:09:56.504 14:20:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:57.075 14:20:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:57.075 14:20:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2812218 00:09:57.075 14:20:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:57.646 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:57.646 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2812218 00:09:57.646 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:57.908 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:57.908 14:20:21 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2812218 00:09:57.908 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:58.479 Initializing NVMe Controllers 00:09:58.479 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:58.479 Controller IO queue size 128, less than required. 00:09:58.479 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:58.479 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:58.479 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:58.479 Initialization complete. Launching workers. 00:09:58.479 ======================================================== 00:09:58.479 Latency(us) 00:09:58.479 Device Information : IOPS MiB/s Average min max 00:09:58.479 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002449.84 1000151.22 1041149.40 00:09:58.479 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003093.13 1000202.47 1009969.81 00:09:58.479 ======================================================== 00:09:58.479 Total : 256.00 0.12 1002771.48 1000151.22 1041149.40 00:09:58.479 00:09:58.479 [2024-10-07 14:20:21.895917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(6) to be set 00:09:58.479 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:58.479 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2812218 00:09:58.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2812218) - No such process 00:09:58.479 14:20:22 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2812218 00:09:58.479 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:58.479 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:58.479 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:58.479 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:09:58.479 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:58.479 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:09:58.479 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:58.479 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:58.479 rmmod nvme_tcp 00:09:58.479 rmmod nvme_fabrics 00:09:58.479 rmmod nvme_keyring 00:09:58.741 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:58.741 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:09:58.741 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:09:58.741 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 2811236 ']' 00:09:58.741 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 2811236 00:09:58.741 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2811236 ']' 00:09:58.741 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2811236 00:09:58.741 14:20:22 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:09:58.741 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:58.741 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2811236 00:09:58.741 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:58.741 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:58.741 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2811236' 00:09:58.741 killing process with pid 2811236 00:09:58.741 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2811236 00:09:58.741 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2811236 00:09:59.684 14:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:59.684 14:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:59.684 14:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:59.684 14:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:09:59.684 14:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:09:59.684 14:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:59.684 14:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:09:59.684 14:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:09:59.684 14:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:59.684 14:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.684 14:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.684 14:20:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.597 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:01.597 00:10:01.597 real 0m18.967s 00:10:01.597 user 0m31.545s 00:10:01.597 sys 0m6.836s 00:10:01.597 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:01.597 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:01.597 ************************************ 00:10:01.597 END TEST nvmf_delete_subsystem 00:10:01.597 ************************************ 00:10:01.597 14:20:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:01.597 14:20:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:01.597 14:20:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:01.597 14:20:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:01.859 ************************************ 00:10:01.859 START TEST nvmf_host_management 00:10:01.859 ************************************ 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:01.859 * 
Looking for test storage... 00:10:01.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:10:01.859 
14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.859 
14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:01.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.859 --rc genhtml_branch_coverage=1 00:10:01.859 --rc genhtml_function_coverage=1 00:10:01.859 --rc genhtml_legend=1 00:10:01.859 --rc geninfo_all_blocks=1 00:10:01.859 --rc geninfo_unexecuted_blocks=1 00:10:01.859 00:10:01.859 ' 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:01.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.859 --rc genhtml_branch_coverage=1 00:10:01.859 --rc genhtml_function_coverage=1 00:10:01.859 --rc genhtml_legend=1 00:10:01.859 --rc geninfo_all_blocks=1 00:10:01.859 --rc geninfo_unexecuted_blocks=1 00:10:01.859 00:10:01.859 ' 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:01.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.859 --rc genhtml_branch_coverage=1 00:10:01.859 --rc genhtml_function_coverage=1 00:10:01.859 --rc genhtml_legend=1 00:10:01.859 --rc geninfo_all_blocks=1 00:10:01.859 --rc geninfo_unexecuted_blocks=1 00:10:01.859 00:10:01.859 ' 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:01.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.859 --rc genhtml_branch_coverage=1 00:10:01.859 --rc genhtml_function_coverage=1 00:10:01.859 --rc genhtml_legend=1 00:10:01.859 --rc geninfo_all_blocks=1 00:10:01.859 --rc geninfo_unexecuted_blocks=1 00:10:01.859 00:10:01.859 ' 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:01.859 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:01.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:10:01.860 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:10:10.001 14:20:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:10.001 14:20:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:10.001 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:10.001 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:10.001 14:20:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:10.001 Found net devices under 0000:31:00.0: cvl_0_0 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:10.001 Found net devices under 0000:31:00.1: cvl_0_1 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:10.001 14:20:32 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:10.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:10.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:10:10.001 00:10:10.001 --- 10.0.0.2 ping statistics --- 00:10:10.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.001 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:10.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:10.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:10:10.001 00:10:10.001 --- 10.0.0.1 ping statistics --- 00:10:10.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.001 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:10.001 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:10.002 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:10.002 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 
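For readers reconstructing what `nvmf_tcp_init` just did, the trace above boils down to the sequence below. This is a sketch, not the harness code: the interface names (`cvl_0_0`/`cvl_0_1`), namespace name, and 10.0.0.0/24 addresses are copied from this log, while the `run` echo wrapper is added here so the sequence can be inspected without root (replace its body with `"$@"` and run as root to actually apply it).

```shell
#!/usr/bin/env bash
# Sketch of the network-namespace setup traced above: move the target NIC
# into its own namespace, address both ends, open TCP/4420, and verify
# reachability in both directions with ping.
set -euo pipefail

TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }  # echoes instead of executing; swap in "$@" as root

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                       # host -> target namespace
run ip netns exec "$NS" ping -c 1 10.0.0.1   # target namespace -> host
```

The two pings mirror the checks in the log: the first exercises the path from the initiator side into the namespace, the second the reverse path, before `modprobe nvme-tcp` loads the host-side transport.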
00:10:10.002 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:10.002 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:10.002 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:10.002 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:10.002 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:10.002 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:10.002 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=2817357 00:10:10.002 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 2817357 00:10:10.002 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:10.002 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2817357 ']' 00:10:10.002 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.002 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:10.002 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:10.002 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:10.002 14:20:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:10.002 [2024-10-07 14:20:33.065391] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:10:10.002 [2024-10-07 14:20:33.065519] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.002 [2024-10-07 14:20:33.226317] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:10.002 [2024-10-07 14:20:33.461991] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.002 [2024-10-07 14:20:33.462075] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:10.002 [2024-10-07 14:20:33.462089] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.002 [2024-10-07 14:20:33.462103] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.002 [2024-10-07 14:20:33.462113] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:10.002 [2024-10-07 14:20:33.464803] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:10.002 [2024-10-07 14:20:33.464954] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:10.002 [2024-10-07 14:20:33.465080] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.002 [2024-10-07 14:20:33.465101] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:10:10.262 14:20:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:10.262 14:20:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:10:10.262 14:20:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:10.262 14:20:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:10.262 14:20:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:10.262 14:20:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:10.262 14:20:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:10.262 14:20:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.262 14:20:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:10.262 [2024-10-07 14:20:33.885439] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:10.263 14:20:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.263 14:20:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:10.263 14:20:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:10.263 14:20:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:10.263 14:20:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:10.263 14:20:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:10.263 14:20:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:10.263 14:20:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.263 14:20:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:10.263 Malloc0 00:10:10.523 [2024-10-07 14:20:33.987834] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:10.523 14:20:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.523 14:20:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:10.523 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:10.523 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:10.523 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2817723 00:10:10.523 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2817723 /var/tmp/bdevperf.sock 00:10:10.523 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2817723 ']' 00:10:10.523 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:10.523 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:10.523 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:10.523 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:10.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:10.524 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:10.524 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:10.524 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:10.524 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:10:10.524 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:10:10.524 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:10.524 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:10.524 { 00:10:10.524 "params": { 00:10:10.524 "name": "Nvme$subsystem", 00:10:10.524 "trtype": "$TEST_TRANSPORT", 00:10:10.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:10.524 "adrfam": "ipv4", 00:10:10.524 "trsvcid": "$NVMF_PORT", 00:10:10.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:10.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:10.524 "hdgst": ${hdgst:-false}, 
00:10:10.524 "ddgst": ${ddgst:-false} 00:10:10.524 }, 00:10:10.524 "method": "bdev_nvme_attach_controller" 00:10:10.524 } 00:10:10.524 EOF 00:10:10.524 )") 00:10:10.524 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:10:10.524 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:10:10.524 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:10:10.524 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:10.524 "params": { 00:10:10.524 "name": "Nvme0", 00:10:10.524 "trtype": "tcp", 00:10:10.524 "traddr": "10.0.0.2", 00:10:10.524 "adrfam": "ipv4", 00:10:10.524 "trsvcid": "4420", 00:10:10.524 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:10.524 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:10.524 "hdgst": false, 00:10:10.524 "ddgst": false 00:10:10.524 }, 00:10:10.524 "method": "bdev_nvme_attach_controller" 00:10:10.524 }' 00:10:10.524 [2024-10-07 14:20:34.121428] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:10:10.524 [2024-10-07 14:20:34.121522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2817723 ] 00:10:10.784 [2024-10-07 14:20:34.237204] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.784 [2024-10-07 14:20:34.418297] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.356 Running I/O for 10 seconds... 
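The `gen_nvmf_target_json` expansion above can be hard to follow through the xtrace noise; a simplified sketch of what it produces is shown below. This is an approximation grounded in the template and the final `printf` output in the log (the real helper also pipes through `jq` and takes its address and port from environment variables); the concrete values here — 10.0.0.2, port 4420, the cnode/host NQNs — are the ones printed above.

```shell
# Simplified sketch of gen_nvmf_target_json as expanded in the trace:
# emit one bdev_nvme_attach_controller entry per subsystem id (default 0),
# joined with commas via IFS.
gen_nvmf_target_json() {
  local subsystem config=()
  for subsystem in "${@:-0}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,
  printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 0
```

The result is fed to bdevperf without touching disk via process substitution, matching the `--json /dev/fd/63` seen in the command line above: `bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10`.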
00:10:11.356 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:11.356 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:10:11.356 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:11.356 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.356 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:11.356 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.356 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:11.356 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:11.356 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:11.356 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:11.356 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:11.356 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:11.356 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:11.356 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:11.356 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:10:11.357 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:11.357 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.357 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:11.357 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.357 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:10:11.357 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:10:11.357 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:10:11.619 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:10:11.619 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:11.619 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:11.619 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:11.619 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.619 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:11.619 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.619 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:10:11.619 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:10:11.619 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:11.619 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:11.619 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:11.619 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:11.619 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.619 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:11.619 [2024-10-07 14:20:35.287890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:11.619 [2024-10-07 14:20:35.287944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:11.619 [2024-10-07 14:20:35.288439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.288486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.288513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.288525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.288539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 
lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.288550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.288564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.288574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.288587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.288598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.288611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.288621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.288634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.288644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.288657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.288668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:10:11.620 [2024-10-07 14:20:35.288690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.288700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.288713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.288723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.288736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.288747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.288760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.288771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.288785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.288795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.288808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.288818] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.288831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.288842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.288856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.288866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.288879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.288889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.288903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.288913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.288926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.288937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.288950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.288961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.288974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.288986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.288999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 
[2024-10-07 14:20:35.289225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.620 [2024-10-07 14:20:35.289735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.620 [2024-10-07 14:20:35.289745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.621 
[2024-10-07 14:20:35.289758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.621 [2024-10-07 14:20:35.289768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.621 [2024-10-07 14:20:35.289781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.621 [2024-10-07 14:20:35.289791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.621 [2024-10-07 14:20:35.289803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.621 [2024-10-07 14:20:35.289813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.621 [2024-10-07 14:20:35.289826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.621 [2024-10-07 14:20:35.289836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.621 [2024-10-07 14:20:35.289849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.621 [2024-10-07 14:20:35.289859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.621 [2024-10-07 14:20:35.289873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.621 [2024-10-07 14:20:35.289884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.621 [2024-10-07 14:20:35.289896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.621 [2024-10-07 14:20:35.289906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.621 [2024-10-07 14:20:35.289919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.621 [2024-10-07 14:20:35.289929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.621 [2024-10-07 14:20:35.289942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.621 [2024-10-07 14:20:35.289952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.621 [2024-10-07 14:20:35.289965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.621 [2024-10-07 14:20:35.289975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.621 [2024-10-07 14:20:35.289988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.621 [2024-10-07 14:20:35.289998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:11.621 [2024-10-07 14:20:35.290014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x61500039f100 is same with the state(6) to be set 00:10:11.621 [2024-10-07 14:20:35.290225] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500039f100 was disconnected and freed. reset controller. 00:10:11.621 [2024-10-07 14:20:35.291489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:10:11.621 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.621 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:11.621 task offset: 74880 on job bdev=Nvme0n1 fails 00:10:11.621 00:10:11.621 Latency(us) 00:10:11.621 [2024-10-07T12:20:35.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.621 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:11.621 Job: Nvme0n1 ended in about 0.42 seconds with error 00:10:11.621 Verification LBA range: start 0x0 length 0x400 00:10:11.621 Nvme0n1 : 0.42 1370.30 85.64 152.26 0.00 40730.91 2648.75 34734.08 00:10:11.621 [2024-10-07T12:20:35.330Z] =================================================================================================================== 00:10:11.621 [2024-10-07T12:20:35.330Z] Total : 1370.30 85.64 152.26 0.00 40730.91 2648.75 34734.08 00:10:11.621 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.621 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:11.621 [2024-10-07 14:20:35.295790] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:11.621 [2024-10-07 14:20:35.295826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:10:11.621 14:20:35 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.621 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:11.882 [2024-10-07 14:20:35.348574] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:12.824 14:20:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2817723 00:10:12.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2817723) - No such process 00:10:12.824 14:20:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:12.824 14:20:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:12.824 14:20:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:12.824 14:20:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:12.824 14:20:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:10:12.824 14:20:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:10:12.824 14:20:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:12.824 14:20:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:12.824 { 00:10:12.824 "params": { 00:10:12.824 "name": "Nvme$subsystem", 00:10:12.824 "trtype": "$TEST_TRANSPORT", 00:10:12.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:12.824 "adrfam": "ipv4", 00:10:12.824 "trsvcid": 
"$NVMF_PORT", 00:10:12.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:12.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:12.824 "hdgst": ${hdgst:-false}, 00:10:12.824 "ddgst": ${ddgst:-false} 00:10:12.824 }, 00:10:12.824 "method": "bdev_nvme_attach_controller" 00:10:12.824 } 00:10:12.824 EOF 00:10:12.824 )") 00:10:12.824 14:20:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:10:12.824 14:20:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:10:12.824 14:20:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:10:12.824 14:20:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:12.824 "params": { 00:10:12.824 "name": "Nvme0", 00:10:12.824 "trtype": "tcp", 00:10:12.824 "traddr": "10.0.0.2", 00:10:12.824 "adrfam": "ipv4", 00:10:12.824 "trsvcid": "4420", 00:10:12.824 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:12.824 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:12.824 "hdgst": false, 00:10:12.824 "ddgst": false 00:10:12.824 }, 00:10:12.824 "method": "bdev_nvme_attach_controller" 00:10:12.824 }' 00:10:12.824 [2024-10-07 14:20:36.392288] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:10:12.824 [2024-10-07 14:20:36.392393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2818094 ] 00:10:12.824 [2024-10-07 14:20:36.510778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.084 [2024-10-07 14:20:36.691175] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.657 Running I/O for 1 seconds... 
00:10:14.599 1679.00 IOPS, 104.94 MiB/s 00:10:14.599 Latency(us) 00:10:14.599 [2024-10-07T12:20:38.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:14.599 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:14.599 Verification LBA range: start 0x0 length 0x400 00:10:14.599 Nvme0n1 : 1.04 1722.52 107.66 0.00 0.00 36476.04 6526.29 33204.91 00:10:14.599 [2024-10-07T12:20:38.308Z] =================================================================================================================== 00:10:14.599 [2024-10-07T12:20:38.308Z] Total : 1722.52 107.66 0.00 0.00 36476.04 6526.29 33204.91 00:10:15.171 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:15.171 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:15.171 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:10:15.171 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:15.171 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:15.171 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:15.171 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:10:15.171 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:15.171 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:10:15.171 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:15.171 14:20:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:15.171 rmmod nvme_tcp 00:10:15.171 rmmod nvme_fabrics 00:10:15.171 rmmod nvme_keyring 00:10:15.432 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:15.432 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:10:15.432 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:10:15.432 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 2817357 ']' 00:10:15.432 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 2817357 00:10:15.432 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2817357 ']' 00:10:15.432 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2817357 00:10:15.432 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:10:15.432 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.432 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2817357 00:10:15.432 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:15.432 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:15.432 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2817357' 00:10:15.432 killing process with pid 2817357 00:10:15.432 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2817357 00:10:15.432 14:20:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2817357 00:10:16.005 [2024-10-07 14:20:39.642584] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:16.266 14:20:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:16.266 14:20:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:16.266 14:20:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:16.266 14:20:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:10:16.266 14:20:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:10:16.266 14:20:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:16.266 14:20:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:10:16.266 14:20:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:16.266 14:20:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:16.266 14:20:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.266 14:20:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.266 14:20:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.179 14:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:18.179 14:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:18.179 00:10:18.179 real 0m16.477s 00:10:18.179 user 0m30.570s 
00:10:18.179 sys 0m6.976s 00:10:18.179 14:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:18.179 14:20:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:18.179 ************************************ 00:10:18.180 END TEST nvmf_host_management 00:10:18.180 ************************************ 00:10:18.180 14:20:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:18.180 14:20:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:18.180 14:20:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.180 14:20:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:18.180 ************************************ 00:10:18.180 START TEST nvmf_lvol 00:10:18.180 ************************************ 00:10:18.180 14:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:18.442 * Looking for test storage... 
00:10:18.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:18.442 14:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:18.442 14:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:10:18.442 14:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.442 14:20:42 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:18.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.442 --rc genhtml_branch_coverage=1 00:10:18.442 --rc genhtml_function_coverage=1 00:10:18.442 --rc genhtml_legend=1 00:10:18.442 --rc geninfo_all_blocks=1 00:10:18.442 --rc geninfo_unexecuted_blocks=1 
00:10:18.442 00:10:18.442 ' 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:18.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.442 --rc genhtml_branch_coverage=1 00:10:18.442 --rc genhtml_function_coverage=1 00:10:18.442 --rc genhtml_legend=1 00:10:18.442 --rc geninfo_all_blocks=1 00:10:18.442 --rc geninfo_unexecuted_blocks=1 00:10:18.442 00:10:18.442 ' 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:18.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.442 --rc genhtml_branch_coverage=1 00:10:18.442 --rc genhtml_function_coverage=1 00:10:18.442 --rc genhtml_legend=1 00:10:18.442 --rc geninfo_all_blocks=1 00:10:18.442 --rc geninfo_unexecuted_blocks=1 00:10:18.442 00:10:18.442 ' 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:18.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.442 --rc genhtml_branch_coverage=1 00:10:18.442 --rc genhtml_function_coverage=1 00:10:18.442 --rc genhtml_legend=1 00:10:18.442 --rc geninfo_all_blocks=1 00:10:18.442 --rc geninfo_unexecuted_blocks=1 00:10:18.442 00:10:18.442 ' 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.442 14:20:42 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.442 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:18.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:10:18.443 14:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:26.593 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:26.593 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:26.593 
14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.593 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:26.594 Found net devices under 0000:31:00.0: cvl_0_0 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:26.594 14:20:49 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:26.594 Found net devices under 0000:31:00.1: cvl_0_1 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:26.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:26.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.517 ms 00:10:26.594 00:10:26.594 --- 10.0.0.2 ping statistics --- 00:10:26.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.594 rtt min/avg/max/mdev = 0.517/0.517/0.517/0.000 ms 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:26.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:26.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:10:26.594 00:10:26.594 --- 10.0.0.1 ping statistics --- 00:10:26.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.594 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=2823171 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 2823171 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2823171 ']' 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:26.594 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:26.594 [2024-10-07 14:20:49.605993] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:10:26.594 [2024-10-07 14:20:49.606105] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.594 [2024-10-07 14:20:49.732096] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:26.594 [2024-10-07 14:20:49.910554] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.594 [2024-10-07 14:20:49.910605] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:26.594 [2024-10-07 14:20:49.910617] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.594 [2024-10-07 14:20:49.910629] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.594 [2024-10-07 14:20:49.910638] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:26.594 [2024-10-07 14:20:49.912381] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.594 [2024-10-07 14:20:49.912463] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.594 [2024-10-07 14:20:49.912464] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.855 14:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:26.855 14:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:10:26.855 14:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:26.855 14:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:26.855 14:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:26.855 14:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:26.855 14:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:27.115 [2024-10-07 14:20:50.568422] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.115 14:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:27.376 14:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:27.376 14:20:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:27.376 14:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:27.376 14:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:27.637 14:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:27.898 14:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a770e1f5-1bd0-4ea5-a97d-b2d9f22da45a 00:10:27.898 14:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a770e1f5-1bd0-4ea5-a97d-b2d9f22da45a lvol 20 00:10:28.158 14:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ac83cabf-f615-41b9-9423-37191a412129 00:10:28.159 14:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:28.159 14:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ac83cabf-f615-41b9-9423-37191a412129 00:10:28.419 14:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:28.680 [2024-10-07 14:20:52.157169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.680 14:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:28.680 14:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2823600 00:10:28.680 14:20:52 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:28.680 14:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:30.066 14:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ac83cabf-f615-41b9-9423-37191a412129 MY_SNAPSHOT 00:10:30.066 14:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=af5ca37f-1fb2-46c9-b393-3b3ffe73859c 00:10:30.066 14:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ac83cabf-f615-41b9-9423-37191a412129 30 00:10:30.328 14:20:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone af5ca37f-1fb2-46c9-b393-3b3ffe73859c MY_CLONE 00:10:30.328 14:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=19dbb46e-4b60-4d92-bb84-798ab15d9212 00:10:30.328 14:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 19dbb46e-4b60-4d92-bb84-798ab15d9212 00:10:30.900 14:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2823600 00:10:39.136 Initializing NVMe Controllers 00:10:39.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:39.136 Controller IO queue size 128, less than required. 00:10:39.136 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:10:39.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:39.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:39.136 Initialization complete. Launching workers. 00:10:39.136 ======================================================== 00:10:39.136 Latency(us) 00:10:39.136 Device Information : IOPS MiB/s Average min max 00:10:39.136 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16380.20 63.99 7816.17 570.58 109174.94 00:10:39.136 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11382.30 44.46 11249.82 4111.02 103161.39 00:10:39.136 ======================================================== 00:10:39.136 Total : 27762.50 108.45 9223.93 570.58 109174.94 00:10:39.136 00:10:39.136 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:39.442 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ac83cabf-f615-41b9-9423-37191a412129 00:10:39.443 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a770e1f5-1bd0-4ea5-a97d-b2d9f22da45a 00:10:39.718 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:39.718 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:39.718 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:39.718 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:39.718 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:10:39.718 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:39.718 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:10:39.718 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:39.718 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:39.718 rmmod nvme_tcp 00:10:39.718 rmmod nvme_fabrics 00:10:39.718 rmmod nvme_keyring 00:10:39.718 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:39.718 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:10:39.718 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:10:39.718 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 2823171 ']' 00:10:39.718 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 2823171 00:10:39.718 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2823171 ']' 00:10:39.718 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2823171 00:10:39.718 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:10:39.718 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:39.718 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2823171 00:10:39.718 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:39.718 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:39.718 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2823171' 00:10:39.718 killing process with pid 2823171 00:10:39.718 14:21:03 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2823171 00:10:39.718 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2823171 00:10:41.103 14:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:41.103 14:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:41.103 14:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:41.103 14:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:10:41.103 14:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:10:41.103 14:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:41.103 14:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:10:41.103 14:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.103 14:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:41.103 14:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.103 14:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.103 14:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.018 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:43.018 00:10:43.018 real 0m24.714s 00:10:43.018 user 1m6.069s 00:10:43.018 sys 0m8.596s 00:10:43.018 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:43.018 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:43.018 ************************************ 00:10:43.018 END TEST 
nvmf_lvol 00:10:43.018 ************************************ 00:10:43.018 14:21:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:43.018 14:21:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:43.018 14:21:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.018 14:21:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:43.018 ************************************ 00:10:43.018 START TEST nvmf_lvs_grow 00:10:43.019 ************************************ 00:10:43.019 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:43.281 * Looking for test storage... 00:10:43.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.281 14:21:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:43.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.281 --rc genhtml_branch_coverage=1 00:10:43.281 --rc genhtml_function_coverage=1 00:10:43.281 --rc genhtml_legend=1 00:10:43.281 --rc geninfo_all_blocks=1 00:10:43.281 --rc geninfo_unexecuted_blocks=1 00:10:43.281 00:10:43.281 ' 
00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:43.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.281 --rc genhtml_branch_coverage=1 00:10:43.281 --rc genhtml_function_coverage=1 00:10:43.281 --rc genhtml_legend=1 00:10:43.281 --rc geninfo_all_blocks=1 00:10:43.281 --rc geninfo_unexecuted_blocks=1 00:10:43.281 00:10:43.281 ' 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:43.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.281 --rc genhtml_branch_coverage=1 00:10:43.281 --rc genhtml_function_coverage=1 00:10:43.281 --rc genhtml_legend=1 00:10:43.281 --rc geninfo_all_blocks=1 00:10:43.281 --rc geninfo_unexecuted_blocks=1 00:10:43.281 00:10:43.281 ' 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:43.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.281 --rc genhtml_branch_coverage=1 00:10:43.281 --rc genhtml_function_coverage=1 00:10:43.281 --rc genhtml_legend=1 00:10:43.281 --rc geninfo_all_blocks=1 00:10:43.281 --rc geninfo_unexecuted_blocks=1 00:10:43.281 00:10:43.281 ' 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.281 14:21:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.281 
14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:43.281 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.282 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.282 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.282 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.282 14:21:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.282 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.282 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.282 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.282 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.282 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:43.282 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:43.282 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:43.282 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:43.282 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.282 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:43.282 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:43.282 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:43.282 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.282 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.282 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.282 
14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:43.282 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:43.282 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:10:43.282 14:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:51.427 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.427 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:10:51.427 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:51.427 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:51.427 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:51.427 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:51.427 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:51.427 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:10:51.427 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:51.427 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:10:51.427 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:10:51.427 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:10:51.427 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:10:51.427 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:51.428 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:51.428 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.428 
14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:51.428 Found net devices under 0000:31:00.0: cvl_0_0 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@416 -- # [[ up == up ]] 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:51.428 Found net devices under 0000:31:00.1: cvl_0_1 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:51.428 14:21:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:51.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:51.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.508 ms 00:10:51.428 00:10:51.428 --- 10.0.0.2 ping statistics --- 00:10:51.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.428 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:51.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:10:51.428 00:10:51.428 --- 10.0.0.1 ping statistics --- 00:10:51.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.428 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=2830876 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 2830876 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2830876 ']' 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:51.428 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.429 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:51.429 14:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:51.429 [2024-10-07 14:21:14.526952] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:10:51.429 [2024-10-07 14:21:14.527068] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.429 [2024-10-07 14:21:14.667081] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.429 [2024-10-07 14:21:14.847836] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.429 [2024-10-07 14:21:14.847891] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.429 [2024-10-07 14:21:14.847903] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.429 [2024-10-07 14:21:14.847916] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.429 [2024-10-07 14:21:14.847925] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:51.429 [2024-10-07 14:21:14.849141] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.689 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:51.689 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:10:51.689 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:51.689 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:51.689 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:51.689 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.689 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:51.950 [2024-10-07 14:21:15.475999] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:51.950 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:51.950 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:51.950 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:51.950 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:51.950 ************************************ 00:10:51.950 START TEST lvs_grow_clean 00:10:51.950 ************************************ 00:10:51.950 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:10:51.950 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:10:51.950 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:51.950 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:51.950 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:51.950 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:51.950 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:51.950 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:51.950 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:51.950 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:52.211 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:52.211 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:52.471 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7d33eac2-9a98-426b-bddd-d607f92cbfbf 00:10:52.471 14:21:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d33eac2-9a98-426b-bddd-d607f92cbfbf 00:10:52.471 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:52.471 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:52.471 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:52.471 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7d33eac2-9a98-426b-bddd-d607f92cbfbf lvol 150 00:10:52.732 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=37f57482-32bf-48ec-a6c8-6d07633c2fb1 00:10:52.732 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:52.732 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:52.732 [2024-10-07 14:21:16.410445] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:52.732 [2024-10-07 14:21:16.410521] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:52.732 true 00:10:52.732 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d33eac2-9a98-426b-bddd-d607f92cbfbf 00:10:52.732 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:52.993 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:52.993 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:53.253 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 37f57482-32bf-48ec-a6c8-6d07633c2fb1 00:10:53.253 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:53.514 [2024-10-07 14:21:17.080661] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.514 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:53.774 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2831587 00:10:53.774 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:53.774 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:53.774 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2831587 /var/tmp/bdevperf.sock 00:10:53.774 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2831587 ']' 00:10:53.774 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:53.774 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.774 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:53.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:53.774 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.774 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:53.774 [2024-10-07 14:21:17.362287] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:10:53.774 [2024-10-07 14:21:17.362393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831587 ] 00:10:54.034 [2024-10-07 14:21:17.493259] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.034 [2024-10-07 14:21:17.670661] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.608 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.608 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:10:54.608 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:54.868 Nvme0n1 00:10:54.868 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:54.868 [ 00:10:54.868 { 00:10:54.868 "name": "Nvme0n1", 00:10:54.868 "aliases": [ 00:10:54.868 "37f57482-32bf-48ec-a6c8-6d07633c2fb1" 00:10:54.868 ], 00:10:54.868 "product_name": "NVMe disk", 00:10:54.868 "block_size": 4096, 00:10:54.868 "num_blocks": 38912, 00:10:54.868 "uuid": "37f57482-32bf-48ec-a6c8-6d07633c2fb1", 00:10:54.868 "numa_id": 0, 00:10:54.868 "assigned_rate_limits": { 00:10:54.868 "rw_ios_per_sec": 0, 00:10:54.868 "rw_mbytes_per_sec": 0, 00:10:54.868 "r_mbytes_per_sec": 0, 00:10:54.868 "w_mbytes_per_sec": 0 00:10:54.868 }, 00:10:54.868 "claimed": false, 00:10:54.868 "zoned": false, 00:10:54.868 "supported_io_types": { 00:10:54.868 "read": true, 
00:10:54.868 "write": true, 00:10:54.868 "unmap": true, 00:10:54.868 "flush": true, 00:10:54.868 "reset": true, 00:10:54.868 "nvme_admin": true, 00:10:54.868 "nvme_io": true, 00:10:54.868 "nvme_io_md": false, 00:10:54.868 "write_zeroes": true, 00:10:54.868 "zcopy": false, 00:10:54.868 "get_zone_info": false, 00:10:54.868 "zone_management": false, 00:10:54.868 "zone_append": false, 00:10:54.868 "compare": true, 00:10:54.868 "compare_and_write": true, 00:10:54.868 "abort": true, 00:10:54.868 "seek_hole": false, 00:10:54.868 "seek_data": false, 00:10:54.868 "copy": true, 00:10:54.868 "nvme_iov_md": false 00:10:54.868 }, 00:10:54.868 "memory_domains": [ 00:10:54.868 { 00:10:54.868 "dma_device_id": "system", 00:10:54.868 "dma_device_type": 1 00:10:54.868 } 00:10:54.868 ], 00:10:54.868 "driver_specific": { 00:10:54.868 "nvme": [ 00:10:54.868 { 00:10:54.868 "trid": { 00:10:54.868 "trtype": "TCP", 00:10:54.868 "adrfam": "IPv4", 00:10:54.868 "traddr": "10.0.0.2", 00:10:54.868 "trsvcid": "4420", 00:10:54.868 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:54.868 }, 00:10:54.868 "ctrlr_data": { 00:10:54.868 "cntlid": 1, 00:10:54.868 "vendor_id": "0x8086", 00:10:54.868 "model_number": "SPDK bdev Controller", 00:10:54.868 "serial_number": "SPDK0", 00:10:54.868 "firmware_revision": "25.01", 00:10:54.868 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:54.868 "oacs": { 00:10:54.868 "security": 0, 00:10:54.868 "format": 0, 00:10:54.868 "firmware": 0, 00:10:54.868 "ns_manage": 0 00:10:54.868 }, 00:10:54.868 "multi_ctrlr": true, 00:10:54.868 "ana_reporting": false 00:10:54.868 }, 00:10:54.868 "vs": { 00:10:54.868 "nvme_version": "1.3" 00:10:54.868 }, 00:10:54.868 "ns_data": { 00:10:54.868 "id": 1, 00:10:54.868 "can_share": true 00:10:54.868 } 00:10:54.868 } 00:10:54.868 ], 00:10:54.868 "mp_policy": "active_passive" 00:10:54.868 } 00:10:54.868 } 00:10:54.868 ] 00:10:54.868 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2831908 00:10:54.868 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:54.869 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:55.129 Running I/O for 10 seconds... 00:10:56.072 Latency(us) 00:10:56.072 [2024-10-07T12:21:19.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:56.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:56.072 Nvme0n1 : 1.00 16132.00 63.02 0.00 0.00 0.00 0.00 0.00 00:10:56.072 [2024-10-07T12:21:19.781Z] =================================================================================================================== 00:10:56.072 [2024-10-07T12:21:19.781Z] Total : 16132.00 63.02 0.00 0.00 0.00 0.00 0.00 00:10:56.072 00:10:57.015 14:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7d33eac2-9a98-426b-bddd-d607f92cbfbf 00:10:57.015 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:57.015 Nvme0n1 : 2.00 16202.50 63.29 0.00 0.00 0.00 0.00 0.00 00:10:57.015 [2024-10-07T12:21:20.724Z] =================================================================================================================== 00:10:57.015 [2024-10-07T12:21:20.724Z] Total : 16202.50 63.29 0.00 0.00 0.00 0.00 0.00 00:10:57.015 00:10:57.015 true 00:10:57.275 14:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d33eac2-9a98-426b-bddd-d607f92cbfbf 00:10:57.275 14:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:10:57.275 14:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:57.275 14:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:57.275 14:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2831908 00:10:58.216 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:58.216 Nvme0n1 : 3.00 16173.67 63.18 0.00 0.00 0.00 0.00 0.00 00:10:58.216 [2024-10-07T12:21:21.925Z] =================================================================================================================== 00:10:58.216 [2024-10-07T12:21:21.925Z] Total : 16173.67 63.18 0.00 0.00 0.00 0.00 0.00 00:10:58.216 00:10:59.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:59.158 Nvme0n1 : 4.00 16217.75 63.35 0.00 0.00 0.00 0.00 0.00 00:10:59.158 [2024-10-07T12:21:22.867Z] =================================================================================================================== 00:10:59.158 [2024-10-07T12:21:22.867Z] Total : 16217.75 63.35 0.00 0.00 0.00 0.00 0.00 00:10:59.158 00:11:00.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:00.100 Nvme0n1 : 5.00 16258.00 63.51 0.00 0.00 0.00 0.00 0.00 00:11:00.100 [2024-10-07T12:21:23.809Z] =================================================================================================================== 00:11:00.100 [2024-10-07T12:21:23.809Z] Total : 16258.00 63.51 0.00 0.00 0.00 0.00 0.00 00:11:00.100 00:11:01.042 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:01.042 Nvme0n1 : 6.00 16284.00 63.61 0.00 0.00 0.00 0.00 0.00 00:11:01.042 [2024-10-07T12:21:24.751Z] =================================================================================================================== 00:11:01.042 
[2024-10-07T12:21:24.751Z] Total : 16284.00 63.61 0.00 0.00 0.00 0.00 0.00 00:11:01.042 00:11:01.984 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:01.984 Nvme0n1 : 7.00 16310.71 63.71 0.00 0.00 0.00 0.00 0.00 00:11:01.984 [2024-10-07T12:21:25.693Z] =================================================================================================================== 00:11:01.984 [2024-10-07T12:21:25.693Z] Total : 16310.71 63.71 0.00 0.00 0.00 0.00 0.00 00:11:01.984 00:11:02.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:02.925 Nvme0n1 : 8.00 16317.50 63.74 0.00 0.00 0.00 0.00 0.00 00:11:02.925 [2024-10-07T12:21:26.634Z] =================================================================================================================== 00:11:02.925 [2024-10-07T12:21:26.634Z] Total : 16317.50 63.74 0.00 0.00 0.00 0.00 0.00 00:11:02.925 00:11:04.316 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:04.316 Nvme0n1 : 9.00 16341.22 63.83 0.00 0.00 0.00 0.00 0.00 00:11:04.316 [2024-10-07T12:21:28.025Z] =================================================================================================================== 00:11:04.316 [2024-10-07T12:21:28.025Z] Total : 16341.22 63.83 0.00 0.00 0.00 0.00 0.00 00:11:04.316 00:11:05.306 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:05.306 Nvme0n1 : 10.00 16349.60 63.87 0.00 0.00 0.00 0.00 0.00 00:11:05.306 [2024-10-07T12:21:29.015Z] =================================================================================================================== 00:11:05.306 [2024-10-07T12:21:29.015Z] Total : 16349.60 63.87 0.00 0.00 0.00 0.00 0.00 00:11:05.306 00:11:05.306 00:11:05.306 Latency(us) 00:11:05.306 [2024-10-07T12:21:29.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:05.306 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:11:05.306 Nvme0n1 : 10.00 16352.23 63.88 0.00 0.00 7823.86 2389.33 23592.96 00:11:05.306 [2024-10-07T12:21:29.015Z] =================================================================================================================== 00:11:05.306 [2024-10-07T12:21:29.015Z] Total : 16352.23 63.88 0.00 0.00 7823.86 2389.33 23592.96 00:11:05.306 { 00:11:05.306 "results": [ 00:11:05.306 { 00:11:05.306 "job": "Nvme0n1", 00:11:05.306 "core_mask": "0x2", 00:11:05.306 "workload": "randwrite", 00:11:05.306 "status": "finished", 00:11:05.306 "queue_depth": 128, 00:11:05.306 "io_size": 4096, 00:11:05.306 "runtime": 10.002303, 00:11:05.306 "iops": 16352.234080491264, 00:11:05.306 "mibps": 63.875914376919, 00:11:05.306 "io_failed": 0, 00:11:05.306 "io_timeout": 0, 00:11:05.306 "avg_latency_us": 7823.859854243093, 00:11:05.306 "min_latency_us": 2389.3333333333335, 00:11:05.306 "max_latency_us": 23592.96 00:11:05.306 } 00:11:05.306 ], 00:11:05.306 "core_count": 1 00:11:05.306 } 00:11:05.306 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2831587 00:11:05.306 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2831587 ']' 00:11:05.306 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2831587 00:11:05.306 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:11:05.306 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:05.306 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2831587 00:11:05.306 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:05.306 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:05.306 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2831587' 00:11:05.306 killing process with pid 2831587 00:11:05.306 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2831587 00:11:05.306 Received shutdown signal, test time was about 10.000000 seconds 00:11:05.306 00:11:05.306 Latency(us) 00:11:05.306 [2024-10-07T12:21:29.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:05.306 [2024-10-07T12:21:29.015Z] =================================================================================================================== 00:11:05.306 [2024-10-07T12:21:29.015Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:05.306 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2831587 00:11:05.565 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:05.826 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:06.087 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d33eac2-9a98-426b-bddd-d607f92cbfbf 00:11:06.087 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:06.087 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:06.087 14:21:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:06.087 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:06.348 [2024-10-07 14:21:29.905826] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:06.348 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d33eac2-9a98-426b-bddd-d607f92cbfbf 00:11:06.348 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:11:06.348 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d33eac2-9a98-426b-bddd-d607f92cbfbf 00:11:06.348 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:06.348 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:06.348 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:06.348 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:06.348 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:06.348 14:21:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:06.348 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:06.348 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:06.348 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d33eac2-9a98-426b-bddd-d607f92cbfbf 00:11:06.610 request: 00:11:06.610 { 00:11:06.610 "uuid": "7d33eac2-9a98-426b-bddd-d607f92cbfbf", 00:11:06.610 "method": "bdev_lvol_get_lvstores", 00:11:06.610 "req_id": 1 00:11:06.610 } 00:11:06.610 Got JSON-RPC error response 00:11:06.610 response: 00:11:06.610 { 00:11:06.610 "code": -19, 00:11:06.610 "message": "No such device" 00:11:06.610 } 00:11:06.610 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:11:06.610 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:06.610 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:06.610 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:06.610 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:06.610 aio_bdev 00:11:06.610 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 37f57482-32bf-48ec-a6c8-6d07633c2fb1 00:11:06.610 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=37f57482-32bf-48ec-a6c8-6d07633c2fb1 00:11:06.610 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:06.610 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:11:06.610 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:06.610 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:06.610 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:06.871 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 37f57482-32bf-48ec-a6c8-6d07633c2fb1 -t 2000 00:11:07.131 [ 00:11:07.131 { 00:11:07.131 "name": "37f57482-32bf-48ec-a6c8-6d07633c2fb1", 00:11:07.131 "aliases": [ 00:11:07.131 "lvs/lvol" 00:11:07.131 ], 00:11:07.131 "product_name": "Logical Volume", 00:11:07.131 "block_size": 4096, 00:11:07.131 "num_blocks": 38912, 00:11:07.131 "uuid": "37f57482-32bf-48ec-a6c8-6d07633c2fb1", 00:11:07.131 "assigned_rate_limits": { 00:11:07.131 "rw_ios_per_sec": 0, 00:11:07.131 "rw_mbytes_per_sec": 0, 00:11:07.131 "r_mbytes_per_sec": 0, 00:11:07.131 "w_mbytes_per_sec": 0 00:11:07.131 }, 00:11:07.131 "claimed": false, 00:11:07.131 "zoned": false, 00:11:07.131 "supported_io_types": { 00:11:07.131 "read": true, 00:11:07.131 "write": true, 00:11:07.131 "unmap": true, 00:11:07.131 "flush": false, 00:11:07.131 "reset": true, 00:11:07.131 
"nvme_admin": false, 00:11:07.131 "nvme_io": false, 00:11:07.131 "nvme_io_md": false, 00:11:07.131 "write_zeroes": true, 00:11:07.131 "zcopy": false, 00:11:07.131 "get_zone_info": false, 00:11:07.131 "zone_management": false, 00:11:07.131 "zone_append": false, 00:11:07.131 "compare": false, 00:11:07.131 "compare_and_write": false, 00:11:07.131 "abort": false, 00:11:07.131 "seek_hole": true, 00:11:07.131 "seek_data": true, 00:11:07.131 "copy": false, 00:11:07.131 "nvme_iov_md": false 00:11:07.131 }, 00:11:07.131 "driver_specific": { 00:11:07.131 "lvol": { 00:11:07.131 "lvol_store_uuid": "7d33eac2-9a98-426b-bddd-d607f92cbfbf", 00:11:07.131 "base_bdev": "aio_bdev", 00:11:07.131 "thin_provision": false, 00:11:07.131 "num_allocated_clusters": 38, 00:11:07.131 "snapshot": false, 00:11:07.131 "clone": false, 00:11:07.131 "esnap_clone": false 00:11:07.131 } 00:11:07.131 } 00:11:07.131 } 00:11:07.131 ] 00:11:07.131 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:11:07.131 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d33eac2-9a98-426b-bddd-d607f92cbfbf 00:11:07.131 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:07.131 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:07.131 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d33eac2-9a98-426b-bddd-d607f92cbfbf 00:11:07.131 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:07.392 14:21:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:07.392 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 37f57482-32bf-48ec-a6c8-6d07633c2fb1 00:11:07.652 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7d33eac2-9a98-426b-bddd-d607f92cbfbf 00:11:07.652 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:07.913 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:07.913 00:11:07.913 real 0m15.952s 00:11:07.913 user 0m15.588s 00:11:07.913 sys 0m1.393s 00:11:07.913 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:07.913 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:07.913 ************************************ 00:11:07.913 END TEST lvs_grow_clean 00:11:07.913 ************************************ 00:11:07.913 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:07.913 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:07.913 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:07.913 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:07.913 ************************************ 
00:11:07.913 START TEST lvs_grow_dirty 00:11:07.913 ************************************ 00:11:07.913 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:11:07.913 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:07.913 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:07.913 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:07.913 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:07.913 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:07.913 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:07.913 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:07.913 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:07.913 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:08.174 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:08.174 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:08.436 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=90860389-b9fd-4556-9c3d-fd9e9012b2a7 00:11:08.436 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90860389-b9fd-4556-9c3d-fd9e9012b2a7 00:11:08.436 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:08.697 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:08.697 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:08.697 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 90860389-b9fd-4556-9c3d-fd9e9012b2a7 lvol 150 00:11:08.697 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a63329be-5bce-4746-b0e5-1ed98a768dba 00:11:08.697 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:08.697 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:08.958 [2024-10-07 14:21:32.470488] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:11:08.958 [2024-10-07 14:21:32.470564] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:08.958 true 00:11:08.958 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90860389-b9fd-4556-9c3d-fd9e9012b2a7 00:11:08.958 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:08.958 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:08.958 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:09.218 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a63329be-5bce-4746-b0e5-1ed98a768dba 00:11:09.479 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:09.479 [2024-10-07 14:21:33.124622] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.479 14:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:09.740 14:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2834711 00:11:09.740 14:21:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:09.740 14:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:09.740 14:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2834711 /var/tmp/bdevperf.sock 00:11:09.740 14:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2834711 ']' 00:11:09.740 14:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:09.740 14:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:09.740 14:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:09.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:09.740 14:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:09.740 14:21:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:09.740 [2024-10-07 14:21:33.381403] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:11:09.740 [2024-10-07 14:21:33.381515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834711 ] 00:11:10.000 [2024-10-07 14:21:33.512446] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.000 [2024-10-07 14:21:33.691488] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.571 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:10.571 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:11:10.571 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:10.832 Nvme0n1 00:11:11.092 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:11.092 [ 00:11:11.092 { 00:11:11.092 "name": "Nvme0n1", 00:11:11.092 "aliases": [ 00:11:11.092 "a63329be-5bce-4746-b0e5-1ed98a768dba" 00:11:11.092 ], 00:11:11.092 "product_name": "NVMe disk", 00:11:11.092 "block_size": 4096, 00:11:11.092 "num_blocks": 38912, 00:11:11.092 "uuid": "a63329be-5bce-4746-b0e5-1ed98a768dba", 00:11:11.092 "numa_id": 0, 00:11:11.092 "assigned_rate_limits": { 00:11:11.092 "rw_ios_per_sec": 0, 00:11:11.093 "rw_mbytes_per_sec": 0, 00:11:11.093 "r_mbytes_per_sec": 0, 00:11:11.093 "w_mbytes_per_sec": 0 00:11:11.093 }, 00:11:11.093 "claimed": false, 00:11:11.093 "zoned": false, 00:11:11.093 "supported_io_types": { 00:11:11.093 "read": true, 
00:11:11.093 "write": true, 00:11:11.093 "unmap": true, 00:11:11.093 "flush": true, 00:11:11.093 "reset": true, 00:11:11.093 "nvme_admin": true, 00:11:11.093 "nvme_io": true, 00:11:11.093 "nvme_io_md": false, 00:11:11.093 "write_zeroes": true, 00:11:11.093 "zcopy": false, 00:11:11.093 "get_zone_info": false, 00:11:11.093 "zone_management": false, 00:11:11.093 "zone_append": false, 00:11:11.093 "compare": true, 00:11:11.093 "compare_and_write": true, 00:11:11.093 "abort": true, 00:11:11.093 "seek_hole": false, 00:11:11.093 "seek_data": false, 00:11:11.093 "copy": true, 00:11:11.093 "nvme_iov_md": false 00:11:11.093 }, 00:11:11.093 "memory_domains": [ 00:11:11.093 { 00:11:11.093 "dma_device_id": "system", 00:11:11.093 "dma_device_type": 1 00:11:11.093 } 00:11:11.093 ], 00:11:11.093 "driver_specific": { 00:11:11.093 "nvme": [ 00:11:11.093 { 00:11:11.093 "trid": { 00:11:11.093 "trtype": "TCP", 00:11:11.093 "adrfam": "IPv4", 00:11:11.093 "traddr": "10.0.0.2", 00:11:11.093 "trsvcid": "4420", 00:11:11.093 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:11.093 }, 00:11:11.093 "ctrlr_data": { 00:11:11.093 "cntlid": 1, 00:11:11.093 "vendor_id": "0x8086", 00:11:11.093 "model_number": "SPDK bdev Controller", 00:11:11.093 "serial_number": "SPDK0", 00:11:11.093 "firmware_revision": "25.01", 00:11:11.093 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:11.093 "oacs": { 00:11:11.093 "security": 0, 00:11:11.093 "format": 0, 00:11:11.093 "firmware": 0, 00:11:11.093 "ns_manage": 0 00:11:11.093 }, 00:11:11.093 "multi_ctrlr": true, 00:11:11.093 "ana_reporting": false 00:11:11.093 }, 00:11:11.093 "vs": { 00:11:11.093 "nvme_version": "1.3" 00:11:11.093 }, 00:11:11.093 "ns_data": { 00:11:11.093 "id": 1, 00:11:11.093 "can_share": true 00:11:11.093 } 00:11:11.093 } 00:11:11.093 ], 00:11:11.093 "mp_policy": "active_passive" 00:11:11.093 } 00:11:11.093 } 00:11:11.093 ] 00:11:11.093 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2835026 00:11:11.093 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:11.093 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:11.353 Running I/O for 10 seconds... 00:11:12.310 Latency(us) 00:11:12.310 [2024-10-07T12:21:36.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:12.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:12.310 Nvme0n1 : 1.00 16059.00 62.73 0.00 0.00 0.00 0.00 0.00 00:11:12.310 [2024-10-07T12:21:36.019Z] =================================================================================================================== 00:11:12.310 [2024-10-07T12:21:36.019Z] Total : 16059.00 62.73 0.00 0.00 0.00 0.00 0.00 00:11:12.310 00:11:13.253 14:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 90860389-b9fd-4556-9c3d-fd9e9012b2a7 00:11:13.253 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:13.253 Nvme0n1 : 2.00 16211.50 63.33 0.00 0.00 0.00 0.00 0.00 00:11:13.253 [2024-10-07T12:21:36.962Z] =================================================================================================================== 00:11:13.253 [2024-10-07T12:21:36.962Z] Total : 16211.50 63.33 0.00 0.00 0.00 0.00 0.00 00:11:13.253 00:11:13.253 true 00:11:13.253 14:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90860389-b9fd-4556-9c3d-fd9e9012b2a7 00:11:13.253 14:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:11:13.513 14:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:13.513 14:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:13.513 14:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2835026 00:11:14.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:14.454 Nvme0n1 : 3.00 16235.33 63.42 0.00 0.00 0.00 0.00 0.00 00:11:14.454 [2024-10-07T12:21:38.163Z] =================================================================================================================== 00:11:14.454 [2024-10-07T12:21:38.163Z] Total : 16235.33 63.42 0.00 0.00 0.00 0.00 0.00 00:11:14.454 00:11:15.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:15.397 Nvme0n1 : 4.00 16291.25 63.64 0.00 0.00 0.00 0.00 0.00 00:11:15.397 [2024-10-07T12:21:39.106Z] =================================================================================================================== 00:11:15.397 [2024-10-07T12:21:39.106Z] Total : 16291.25 63.64 0.00 0.00 0.00 0.00 0.00 00:11:15.397 00:11:16.338 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:16.338 Nvme0n1 : 5.00 16325.40 63.77 0.00 0.00 0.00 0.00 0.00 00:11:16.338 [2024-10-07T12:21:40.047Z] =================================================================================================================== 00:11:16.338 [2024-10-07T12:21:40.047Z] Total : 16325.40 63.77 0.00 0.00 0.00 0.00 0.00 00:11:16.338 00:11:17.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:17.280 Nvme0n1 : 6.00 16342.83 63.84 0.00 0.00 0.00 0.00 0.00 00:11:17.280 [2024-10-07T12:21:40.989Z] =================================================================================================================== 00:11:17.280 
[2024-10-07T12:21:40.989Z] Total : 16342.83 63.84 0.00 0.00 0.00 0.00 0.00 00:11:17.280 00:11:18.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:18.222 Nvme0n1 : 7.00 16360.86 63.91 0.00 0.00 0.00 0.00 0.00 00:11:18.222 [2024-10-07T12:21:41.931Z] =================================================================================================================== 00:11:18.222 [2024-10-07T12:21:41.931Z] Total : 16360.86 63.91 0.00 0.00 0.00 0.00 0.00 00:11:18.222 00:11:19.161 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:19.161 Nvme0n1 : 8.00 16373.50 63.96 0.00 0.00 0.00 0.00 0.00 00:11:19.161 [2024-10-07T12:21:42.870Z] =================================================================================================================== 00:11:19.161 [2024-10-07T12:21:42.870Z] Total : 16373.50 63.96 0.00 0.00 0.00 0.00 0.00 00:11:19.161 00:11:20.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:20.545 Nvme0n1 : 9.00 16389.78 64.02 0.00 0.00 0.00 0.00 0.00 00:11:20.545 [2024-10-07T12:21:44.254Z] =================================================================================================================== 00:11:20.545 [2024-10-07T12:21:44.254Z] Total : 16389.78 64.02 0.00 0.00 0.00 0.00 0.00 00:11:20.545 00:11:21.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:21.486 Nvme0n1 : 10.00 16396.20 64.05 0.00 0.00 0.00 0.00 0.00 00:11:21.486 [2024-10-07T12:21:45.195Z] =================================================================================================================== 00:11:21.486 [2024-10-07T12:21:45.195Z] Total : 16396.20 64.05 0.00 0.00 0.00 0.00 0.00 00:11:21.486 00:11:21.486 00:11:21.487 Latency(us) 00:11:21.487 [2024-10-07T12:21:45.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:21.487 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:11:21.487 Nvme0n1 : 10.00 16403.93 64.08 0.00 0.00 7799.29 4724.05 17257.81 00:11:21.487 [2024-10-07T12:21:45.196Z] =================================================================================================================== 00:11:21.487 [2024-10-07T12:21:45.196Z] Total : 16403.93 64.08 0.00 0.00 7799.29 4724.05 17257.81 00:11:21.487 { 00:11:21.487 "results": [ 00:11:21.487 { 00:11:21.487 "job": "Nvme0n1", 00:11:21.487 "core_mask": "0x2", 00:11:21.487 "workload": "randwrite", 00:11:21.487 "status": "finished", 00:11:21.487 "queue_depth": 128, 00:11:21.487 "io_size": 4096, 00:11:21.487 "runtime": 10.00309, 00:11:21.487 "iops": 16403.931185263755, 00:11:21.487 "mibps": 64.07785619243654, 00:11:21.487 "io_failed": 0, 00:11:21.487 "io_timeout": 0, 00:11:21.487 "avg_latency_us": 7799.288397017897, 00:11:21.487 "min_latency_us": 4724.053333333333, 00:11:21.487 "max_latency_us": 17257.81333333333 00:11:21.487 } 00:11:21.487 ], 00:11:21.487 "core_count": 1 00:11:21.487 } 00:11:21.487 14:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2834711 00:11:21.487 14:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2834711 ']' 00:11:21.487 14:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2834711 00:11:21.487 14:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:11:21.487 14:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:21.487 14:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2834711 00:11:21.487 14:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:21.487 14:21:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:21.487 14:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2834711' 00:11:21.487 killing process with pid 2834711 00:11:21.487 14:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2834711 00:11:21.487 Received shutdown signal, test time was about 10.000000 seconds 00:11:21.487 00:11:21.487 Latency(us) 00:11:21.487 [2024-10-07T12:21:45.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:21.487 [2024-10-07T12:21:45.196Z] =================================================================================================================== 00:11:21.487 [2024-10-07T12:21:45.196Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:21.487 14:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2834711 00:11:21.747 14:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:22.007 14:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:22.268 14:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90860389-b9fd-4556-9c3d-fd9e9012b2a7 00:11:22.268 14:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:22.268 14:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:11:22.268 14:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:22.268 14:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2830876 00:11:22.268 14:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2830876 00:11:22.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2830876 Killed "${NVMF_APP[@]}" "$@" 00:11:22.528 14:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:22.528 14:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:22.528 14:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:22.528 14:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:22.528 14:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:22.528 14:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=2837375 00:11:22.528 14:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 2837375 00:11:22.528 14:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:22.528 14:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2837375 ']' 00:11:22.528 14:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.528 14:21:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:22.528 14:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.528 14:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:22.528 14:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:22.528 [2024-10-07 14:21:46.108949] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:11:22.528 [2024-10-07 14:21:46.109068] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.789 [2024-10-07 14:21:46.243084] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.789 [2024-10-07 14:21:46.421476] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.789 [2024-10-07 14:21:46.421527] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.789 [2024-10-07 14:21:46.421538] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.789 [2024-10-07 14:21:46.421550] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.789 [2024-10-07 14:21:46.421559] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:22.789 [2024-10-07 14:21:46.422727] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.359 14:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:23.359 14:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:11:23.359 14:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:23.359 14:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:23.359 14:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:23.359 14:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:23.359 14:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:23.359 [2024-10-07 14:21:47.050580] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:23.359 [2024-10-07 14:21:47.050737] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:23.359 [2024-10-07 14:21:47.050785] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:23.360 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:23.360 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a63329be-5bce-4746-b0e5-1ed98a768dba 00:11:23.360 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=a63329be-5bce-4746-b0e5-1ed98a768dba 
00:11:23.621 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:23.621 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:11:23.621 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:23.621 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:23.621 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:23.621 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a63329be-5bce-4746-b0e5-1ed98a768dba -t 2000 00:11:23.881 [ 00:11:23.881 { 00:11:23.881 "name": "a63329be-5bce-4746-b0e5-1ed98a768dba", 00:11:23.881 "aliases": [ 00:11:23.881 "lvs/lvol" 00:11:23.881 ], 00:11:23.881 "product_name": "Logical Volume", 00:11:23.881 "block_size": 4096, 00:11:23.881 "num_blocks": 38912, 00:11:23.881 "uuid": "a63329be-5bce-4746-b0e5-1ed98a768dba", 00:11:23.881 "assigned_rate_limits": { 00:11:23.881 "rw_ios_per_sec": 0, 00:11:23.881 "rw_mbytes_per_sec": 0, 00:11:23.881 "r_mbytes_per_sec": 0, 00:11:23.881 "w_mbytes_per_sec": 0 00:11:23.881 }, 00:11:23.881 "claimed": false, 00:11:23.881 "zoned": false, 00:11:23.881 "supported_io_types": { 00:11:23.881 "read": true, 00:11:23.881 "write": true, 00:11:23.881 "unmap": true, 00:11:23.881 "flush": false, 00:11:23.881 "reset": true, 00:11:23.881 "nvme_admin": false, 00:11:23.881 "nvme_io": false, 00:11:23.881 "nvme_io_md": false, 00:11:23.881 "write_zeroes": true, 00:11:23.881 "zcopy": false, 00:11:23.881 "get_zone_info": false, 00:11:23.881 "zone_management": false, 00:11:23.881 "zone_append": 
false, 00:11:23.881 "compare": false, 00:11:23.881 "compare_and_write": false, 00:11:23.881 "abort": false, 00:11:23.881 "seek_hole": true, 00:11:23.881 "seek_data": true, 00:11:23.881 "copy": false, 00:11:23.881 "nvme_iov_md": false 00:11:23.881 }, 00:11:23.881 "driver_specific": { 00:11:23.881 "lvol": { 00:11:23.881 "lvol_store_uuid": "90860389-b9fd-4556-9c3d-fd9e9012b2a7", 00:11:23.881 "base_bdev": "aio_bdev", 00:11:23.881 "thin_provision": false, 00:11:23.881 "num_allocated_clusters": 38, 00:11:23.881 "snapshot": false, 00:11:23.881 "clone": false, 00:11:23.881 "esnap_clone": false 00:11:23.881 } 00:11:23.881 } 00:11:23.881 } 00:11:23.881 ] 00:11:23.881 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:11:23.881 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90860389-b9fd-4556-9c3d-fd9e9012b2a7 00:11:23.881 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:23.881 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:23.881 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90860389-b9fd-4556-9c3d-fd9e9012b2a7 00:11:23.881 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:24.142 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:24.142 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:11:24.403 [2024-10-07 14:21:47.882368] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:24.403 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90860389-b9fd-4556-9c3d-fd9e9012b2a7 00:11:24.403 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:11:24.403 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90860389-b9fd-4556-9c3d-fd9e9012b2a7 00:11:24.403 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:24.403 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:24.403 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:24.403 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:24.403 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:24.403 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:24.403 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:24.403 14:21:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:24.403 14:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90860389-b9fd-4556-9c3d-fd9e9012b2a7 00:11:24.403 request: 00:11:24.403 { 00:11:24.403 "uuid": "90860389-b9fd-4556-9c3d-fd9e9012b2a7", 00:11:24.403 "method": "bdev_lvol_get_lvstores", 00:11:24.403 "req_id": 1 00:11:24.403 } 00:11:24.403 Got JSON-RPC error response 00:11:24.403 response: 00:11:24.403 { 00:11:24.403 "code": -19, 00:11:24.403 "message": "No such device" 00:11:24.403 } 00:11:24.403 14:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:11:24.403 14:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:24.403 14:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:24.403 14:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:24.403 14:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:24.663 aio_bdev 00:11:24.663 14:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a63329be-5bce-4746-b0e5-1ed98a768dba 00:11:24.663 14:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=a63329be-5bce-4746-b0e5-1ed98a768dba 00:11:24.663 14:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:24.663 14:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:11:24.663 14:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:24.663 14:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:24.663 14:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:24.925 14:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a63329be-5bce-4746-b0e5-1ed98a768dba -t 2000 00:11:24.925 [ 00:11:24.925 { 00:11:24.925 "name": "a63329be-5bce-4746-b0e5-1ed98a768dba", 00:11:24.925 "aliases": [ 00:11:24.925 "lvs/lvol" 00:11:24.925 ], 00:11:24.925 "product_name": "Logical Volume", 00:11:24.925 "block_size": 4096, 00:11:24.925 "num_blocks": 38912, 00:11:24.925 "uuid": "a63329be-5bce-4746-b0e5-1ed98a768dba", 00:11:24.925 "assigned_rate_limits": { 00:11:24.925 "rw_ios_per_sec": 0, 00:11:24.925 "rw_mbytes_per_sec": 0, 00:11:24.925 "r_mbytes_per_sec": 0, 00:11:24.925 "w_mbytes_per_sec": 0 00:11:24.925 }, 00:11:24.925 "claimed": false, 00:11:24.925 "zoned": false, 00:11:24.925 "supported_io_types": { 00:11:24.925 "read": true, 00:11:24.925 "write": true, 00:11:24.925 "unmap": true, 00:11:24.925 "flush": false, 00:11:24.925 "reset": true, 00:11:24.925 "nvme_admin": false, 00:11:24.925 "nvme_io": false, 00:11:24.925 "nvme_io_md": false, 00:11:24.925 "write_zeroes": true, 00:11:24.925 "zcopy": false, 00:11:24.925 "get_zone_info": false, 00:11:24.925 "zone_management": false, 00:11:24.925 "zone_append": false, 00:11:24.925 "compare": false, 00:11:24.925 "compare_and_write": false, 
00:11:24.925 "abort": false, 00:11:24.925 "seek_hole": true, 00:11:24.925 "seek_data": true, 00:11:24.925 "copy": false, 00:11:24.925 "nvme_iov_md": false 00:11:24.925 }, 00:11:24.925 "driver_specific": { 00:11:24.925 "lvol": { 00:11:24.925 "lvol_store_uuid": "90860389-b9fd-4556-9c3d-fd9e9012b2a7", 00:11:24.925 "base_bdev": "aio_bdev", 00:11:24.925 "thin_provision": false, 00:11:24.925 "num_allocated_clusters": 38, 00:11:24.925 "snapshot": false, 00:11:24.925 "clone": false, 00:11:24.925 "esnap_clone": false 00:11:24.925 } 00:11:24.925 } 00:11:24.925 } 00:11:24.925 ] 00:11:24.925 14:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:11:24.925 14:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90860389-b9fd-4556-9c3d-fd9e9012b2a7 00:11:24.925 14:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:25.186 14:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:25.186 14:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 90860389-b9fd-4556-9c3d-fd9e9012b2a7 00:11:25.186 14:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:25.447 14:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:25.447 14:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a63329be-5bce-4746-b0e5-1ed98a768dba 00:11:25.447 14:21:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 90860389-b9fd-4556-9c3d-fd9e9012b2a7 00:11:25.708 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:25.708 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:25.969 00:11:25.969 real 0m17.874s 00:11:25.969 user 0m46.704s 00:11:25.969 sys 0m2.963s 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:25.969 ************************************ 00:11:25.969 END TEST lvs_grow_dirty 00:11:25.969 ************************************ 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:25.969 nvmf_trace.0 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:25.969 rmmod nvme_tcp 00:11:25.969 rmmod nvme_fabrics 00:11:25.969 rmmod nvme_keyring 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 2837375 ']' 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 2837375 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2837375 ']' 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2837375 
00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:25.969 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2837375 00:11:26.231 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:26.231 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:26.231 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2837375' 00:11:26.231 killing process with pid 2837375 00:11:26.231 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2837375 00:11:26.231 14:21:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2837375 00:11:27.171 14:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:27.171 14:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:27.171 14:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:27.171 14:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:11:27.171 14:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:11:27.171 14:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:27.171 14:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:11:27.171 14:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:27.171 14:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:11:27.171 14:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.171 14:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.171 14:21:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.083 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:29.083 00:11:29.083 real 0m45.999s 00:11:29.083 user 1m9.236s 00:11:29.083 sys 0m10.625s 00:11:29.083 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:29.083 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:29.083 ************************************ 00:11:29.083 END TEST nvmf_lvs_grow 00:11:29.083 ************************************ 00:11:29.083 14:21:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:29.083 14:21:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:29.083 14:21:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:29.083 14:21:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:29.083 ************************************ 00:11:29.083 START TEST nvmf_bdev_io_wait 00:11:29.083 ************************************ 00:11:29.083 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:29.345 * Looking for test storage... 
00:11:29.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.345 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:29.345 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:11:29.345 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:29.345 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:29.345 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.345 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.345 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.345 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.345 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.345 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.345 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.345 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.345 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.345 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.345 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.345 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:11:29.345 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:29.346 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.346 --rc genhtml_branch_coverage=1 00:11:29.346 --rc genhtml_function_coverage=1 00:11:29.346 --rc genhtml_legend=1 00:11:29.346 --rc geninfo_all_blocks=1 00:11:29.346 --rc geninfo_unexecuted_blocks=1 00:11:29.346 00:11:29.346 ' 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:29.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.346 --rc genhtml_branch_coverage=1 00:11:29.346 --rc genhtml_function_coverage=1 00:11:29.346 --rc genhtml_legend=1 00:11:29.346 --rc geninfo_all_blocks=1 00:11:29.346 --rc geninfo_unexecuted_blocks=1 00:11:29.346 00:11:29.346 ' 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:29.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.346 --rc genhtml_branch_coverage=1 00:11:29.346 --rc genhtml_function_coverage=1 00:11:29.346 --rc genhtml_legend=1 00:11:29.346 --rc geninfo_all_blocks=1 00:11:29.346 --rc geninfo_unexecuted_blocks=1 00:11:29.346 00:11:29.346 ' 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:29.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.346 --rc genhtml_branch_coverage=1 00:11:29.346 --rc genhtml_function_coverage=1 00:11:29.346 --rc genhtml_legend=1 00:11:29.346 --rc geninfo_all_blocks=1 00:11:29.346 --rc geninfo_unexecuted_blocks=1 00:11:29.346 00:11:29.346 ' 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.346 14:21:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:11:29.346 14:21:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:37.496 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:37.496 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:11:37.496 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:37.496 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:37.496 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:37.496 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:37.496 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:37.496 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:11:37.496 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:37.496 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:11:37.496 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:11:37.496 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:11:37.496 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:11:37.496 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:11:37.496 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:11:37.496 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:37.496 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:37.496 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:37.496 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:37.496 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:37.497 14:21:59 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:37.497 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:37.497 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.497 14:21:59 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:37.497 Found net devices under 0000:31:00.0: cvl_0_0 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.497 
14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:37.497 Found net devices under 0000:31:00.1: cvl_0_1 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:37.497 14:21:59 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:37.497 14:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:37.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:37.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:11:37.497 00:11:37.497 --- 10.0.0.2 ping statistics --- 00:11:37.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.497 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:37.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:37.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:11:37.497 00:11:37.497 --- 10.0.0.1 ping statistics --- 00:11:37.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.497 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=2842534 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # waitforlisten 2842534 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2842534 ']' 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:37.497 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:37.497 [2024-10-07 14:22:00.201066] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:11:37.498 [2024-10-07 14:22:00.201220] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.498 [2024-10-07 14:22:00.326644] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:37.498 [2024-10-07 14:22:00.507488] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.498 [2024-10-07 14:22:00.507540] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:37.498 [2024-10-07 14:22:00.507552] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.498 [2024-10-07 14:22:00.507564] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.498 [2024-10-07 14:22:00.507573] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:37.498 [2024-10-07 14:22:00.509855] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.498 [2024-10-07 14:22:00.509942] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.498 [2024-10-07 14:22:00.510106] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.498 [2024-10-07 14:22:00.510271] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.498 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:37.498 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:11:37.498 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:37.498 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:37.498 14:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:37.498 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.498 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:37.498 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.498 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:37.498 14:22:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.498 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:37.498 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.498 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:37.498 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.498 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:37.498 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.498 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:37.498 [2024-10-07 14:22:01.193498] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.498 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.498 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:37.498 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.498 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:37.759 Malloc0 00:11:37.759 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.759 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:37.759 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.759 
14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:37.759 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.759 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:37.760 [2024-10-07 14:22:01.302461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2842882 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2842885 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 
00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:37.760 { 00:11:37.760 "params": { 00:11:37.760 "name": "Nvme$subsystem", 00:11:37.760 "trtype": "$TEST_TRANSPORT", 00:11:37.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:37.760 "adrfam": "ipv4", 00:11:37.760 "trsvcid": "$NVMF_PORT", 00:11:37.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:37.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:37.760 "hdgst": ${hdgst:-false}, 00:11:37.760 "ddgst": ${ddgst:-false} 00:11:37.760 }, 00:11:37.760 "method": "bdev_nvme_attach_controller" 00:11:37.760 } 00:11:37.760 EOF 00:11:37.760 )") 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2842887 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2842891 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:37.760 { 00:11:37.760 "params": { 00:11:37.760 "name": "Nvme$subsystem", 00:11:37.760 "trtype": "$TEST_TRANSPORT", 00:11:37.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:37.760 "adrfam": "ipv4", 00:11:37.760 "trsvcid": "$NVMF_PORT", 00:11:37.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:37.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:37.760 "hdgst": ${hdgst:-false}, 00:11:37.760 "ddgst": ${ddgst:-false} 00:11:37.760 }, 00:11:37.760 "method": "bdev_nvme_attach_controller" 00:11:37.760 } 00:11:37.760 EOF 00:11:37.760 )") 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:37.760 { 00:11:37.760 "params": { 00:11:37.760 "name": "Nvme$subsystem", 00:11:37.760 "trtype": "$TEST_TRANSPORT", 00:11:37.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:37.760 "adrfam": "ipv4", 00:11:37.760 "trsvcid": "$NVMF_PORT", 00:11:37.760 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:11:37.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:37.760 "hdgst": ${hdgst:-false}, 00:11:37.760 "ddgst": ${ddgst:-false} 00:11:37.760 }, 00:11:37.760 "method": "bdev_nvme_attach_controller" 00:11:37.760 } 00:11:37.760 EOF 00:11:37.760 )") 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:37.760 { 00:11:37.760 "params": { 00:11:37.760 "name": "Nvme$subsystem", 00:11:37.760 "trtype": "$TEST_TRANSPORT", 00:11:37.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:37.760 "adrfam": "ipv4", 00:11:37.760 "trsvcid": "$NVMF_PORT", 00:11:37.760 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:37.760 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:37.760 "hdgst": ${hdgst:-false}, 00:11:37.760 "ddgst": ${ddgst:-false} 00:11:37.760 }, 00:11:37.760 "method": "bdev_nvme_attach_controller" 00:11:37.760 } 00:11:37.760 EOF 00:11:37.760 )") 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2842882 00:11:37.760 14:22:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:37.760 "params": { 00:11:37.760 "name": "Nvme1", 00:11:37.760 "trtype": "tcp", 00:11:37.760 "traddr": "10.0.0.2", 00:11:37.760 "adrfam": "ipv4", 00:11:37.760 "trsvcid": "4420", 00:11:37.760 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:37.760 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:37.760 "hdgst": false, 00:11:37.760 "ddgst": false 00:11:37.760 }, 00:11:37.760 "method": "bdev_nvme_attach_controller" 00:11:37.760 }' 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:37.760 "params": { 00:11:37.760 "name": "Nvme1", 00:11:37.760 "trtype": "tcp", 00:11:37.760 "traddr": "10.0.0.2", 00:11:37.760 "adrfam": "ipv4", 00:11:37.760 "trsvcid": "4420", 00:11:37.760 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:37.760 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:37.760 "hdgst": false, 00:11:37.760 "ddgst": false 00:11:37.760 }, 00:11:37.760 "method": "bdev_nvme_attach_controller" 00:11:37.760 }' 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:37.760 "params": { 00:11:37.760 "name": "Nvme1", 00:11:37.760 "trtype": "tcp", 00:11:37.760 "traddr": "10.0.0.2", 00:11:37.760 "adrfam": "ipv4", 00:11:37.760 "trsvcid": "4420", 00:11:37.760 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:37.760 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:37.760 "hdgst": false, 00:11:37.760 "ddgst": false 00:11:37.760 }, 00:11:37.760 "method": "bdev_nvme_attach_controller" 00:11:37.760 }' 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:11:37.760 14:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:37.760 "params": { 00:11:37.760 "name": "Nvme1", 00:11:37.760 "trtype": "tcp", 00:11:37.760 "traddr": "10.0.0.2", 00:11:37.760 "adrfam": "ipv4", 00:11:37.760 "trsvcid": "4420", 00:11:37.760 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:37.760 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:37.760 "hdgst": false, 00:11:37.760 "ddgst": false 00:11:37.760 }, 00:11:37.760 "method": "bdev_nvme_attach_controller" 00:11:37.760 }' 00:11:37.760 [2024-10-07 14:22:01.378625] Starting SPDK v25.01-pre git sha1 
3950cd1bb / DPDK 24.03.0 initialization... 00:11:37.760 [2024-10-07 14:22:01.378720] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:37.760 [2024-10-07 14:22:01.385986] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:11:37.760 [2024-10-07 14:22:01.386095] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:37.761 [2024-10-07 14:22:01.386656] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:11:37.761 [2024-10-07 14:22:01.386669] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:11:37.761 [2024-10-07 14:22:01.386752] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:37.761 [2024-10-07 14:22:01.386763] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:38.021 [2024-10-07 14:22:01.536307] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.021 [2024-10-07 14:22:01.573830] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.021 [2024-10-07 14:22:01.620521] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.021 [2024-10-07 14:22:01.683937] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.021 [2024-10-07 
14:22:01.710594] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:11:38.281 [2024-10-07 14:22:01.747870] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:11:38.281 [2024-10-07 14:22:01.795428] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:11:38.281 [2024-10-07 14:22:01.862692] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:11:38.542 Running I/O for 1 seconds... 00:11:38.542 Running I/O for 1 seconds... 00:11:38.802 Running I/O for 1 seconds... 00:11:38.802 Running I/O for 1 seconds... 00:11:39.743 8406.00 IOPS, 32.84 MiB/s 00:11:39.743 Latency(us) 00:11:39.743 [2024-10-07T12:22:03.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.743 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:39.743 Nvme1n1 : 1.02 8383.43 32.75 0.00 0.00 15101.25 8465.07 22937.60 00:11:39.743 [2024-10-07T12:22:03.452Z] =================================================================================================================== 00:11:39.743 [2024-10-07T12:22:03.452Z] Total : 8383.43 32.75 0.00 0.00 15101.25 8465.07 22937.60 00:11:39.743 174208.00 IOPS, 680.50 MiB/s 00:11:39.743 Latency(us) 00:11:39.743 [2024-10-07T12:22:03.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.743 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:39.743 Nvme1n1 : 1.00 173841.72 679.07 0.00 0.00 732.34 337.92 2102.61 00:11:39.743 [2024-10-07T12:22:03.452Z] =================================================================================================================== 00:11:39.743 [2024-10-07T12:22:03.452Z] Total : 173841.72 679.07 0.00 0.00 732.34 337.92 2102.61 00:11:39.743 18541.00 IOPS, 72.43 MiB/s 00:11:39.743 Latency(us) 00:11:39.743 [2024-10-07T12:22:03.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.743 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 
128, IO size: 4096) 00:11:39.743 Nvme1n1 : 1.01 18599.50 72.65 0.00 0.00 6862.20 3522.56 15619.41 00:11:39.743 [2024-10-07T12:22:03.452Z] =================================================================================================================== 00:11:39.743 [2024-10-07T12:22:03.452Z] Total : 18599.50 72.65 0.00 0.00 6862.20 3522.56 15619.41 00:11:39.743 8270.00 IOPS, 32.30 MiB/s 00:11:39.743 Latency(us) 00:11:39.743 [2024-10-07T12:22:03.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.743 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:39.743 Nvme1n1 : 1.01 8374.88 32.71 0.00 0.00 15236.38 4724.05 38666.24 00:11:39.743 [2024-10-07T12:22:03.452Z] =================================================================================================================== 00:11:39.743 [2024-10-07T12:22:03.452Z] Total : 8374.88 32.71 0.00 0.00 15236.38 4724.05 38666.24 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2842885 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2842887 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2842891 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:40.683 rmmod nvme_tcp 00:11:40.683 rmmod nvme_fabrics 00:11:40.683 rmmod nvme_keyring 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 2842534 ']' 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 2842534 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2842534 ']' 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2842534 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:40.683 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2842534 00:11:40.683 14:22:04 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:40.684 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:40.684 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2842534' 00:11:40.684 killing process with pid 2842534 00:11:40.684 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2842534 00:11:40.684 14:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2842534 00:11:41.625 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:41.625 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:41.625 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:41.625 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:11:41.625 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:11:41.625 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:41.625 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:11:41.625 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:41.625 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:41.625 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.625 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.625 14:22:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.538 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:43.538 00:11:43.538 real 0m14.389s 00:11:43.538 user 0m28.223s 00:11:43.538 sys 0m7.490s 00:11:43.538 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:43.538 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:43.538 ************************************ 00:11:43.538 END TEST nvmf_bdev_io_wait 00:11:43.538 ************************************ 00:11:43.538 14:22:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:43.538 14:22:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:43.538 14:22:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:43.538 14:22:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:43.538 ************************************ 00:11:43.538 START TEST nvmf_queue_depth 00:11:43.538 ************************************ 00:11:43.538 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:43.810 * Looking for test storage... 
00:11:43.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:11:43.810 
14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:43.810 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:43.810 --rc genhtml_branch_coverage=1 00:11:43.810 --rc genhtml_function_coverage=1 00:11:43.810 --rc genhtml_legend=1 00:11:43.810 --rc geninfo_all_blocks=1 00:11:43.810 --rc geninfo_unexecuted_blocks=1 00:11:43.810 00:11:43.810 ' 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:43.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.810 --rc genhtml_branch_coverage=1 00:11:43.810 --rc genhtml_function_coverage=1 00:11:43.810 --rc genhtml_legend=1 00:11:43.810 --rc geninfo_all_blocks=1 00:11:43.810 --rc geninfo_unexecuted_blocks=1 00:11:43.810 00:11:43.810 ' 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:43.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.810 --rc genhtml_branch_coverage=1 00:11:43.810 --rc genhtml_function_coverage=1 00:11:43.810 --rc genhtml_legend=1 00:11:43.810 --rc geninfo_all_blocks=1 00:11:43.810 --rc geninfo_unexecuted_blocks=1 00:11:43.810 00:11:43.810 ' 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:43.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.810 --rc genhtml_branch_coverage=1 00:11:43.810 --rc genhtml_function_coverage=1 00:11:43.810 --rc genhtml_legend=1 00:11:43.810 --rc geninfo_all_blocks=1 00:11:43.810 --rc geninfo_unexecuted_blocks=1 00:11:43.810 00:11:43.810 ' 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:43.810 14:22:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.810 14:22:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:43.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.810 14:22:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:11:43.810 14:22:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:11:51.949 14:22:14 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:51.949 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:51.949 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:51.949 Found net devices under 0000:31:00.0: cvl_0_0 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.949 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:51.950 Found net devices under 0000:31:00.1: cvl_0_1 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.950 
14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:51.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:51.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.735 ms 00:11:51.950 00:11:51.950 --- 10.0.0.2 ping statistics --- 00:11:51.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.950 rtt min/avg/max/mdev = 0.735/0.735/0.735/0.000 ms 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:51.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:51.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:11:51.950 00:11:51.950 --- 10.0.0.1 ping statistics --- 00:11:51.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.950 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=2847911 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 
2847911 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2847911 ']' 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:51.950 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:51.950 [2024-10-07 14:22:15.095364] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:11:51.950 [2024-10-07 14:22:15.095563] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.950 [2024-10-07 14:22:15.256897] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.950 [2024-10-07 14:22:15.481831] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.950 [2024-10-07 14:22:15.481909] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:51.950 [2024-10-07 14:22:15.481923] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.950 [2024-10-07 14:22:15.481937] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.950 [2024-10-07 14:22:15.481948] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:51.950 [2024-10-07 14:22:15.483441] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.212 14:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:52.212 14:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:11:52.212 14:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:52.212 14:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:52.212 14:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:52.212 14:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.212 14:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:52.212 14:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.212 14:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:52.212 [2024-10-07 14:22:15.921160] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.473 14:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.473 14:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:11:52.473 14:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.473 14:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:52.473 Malloc0 00:11:52.473 14:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.473 14:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:52.473 14:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.473 14:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:52.473 14:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.473 14:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:52.474 14:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.474 14:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:52.474 14:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.474 14:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.474 14:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.474 14:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:52.474 [2024-10-07 14:22:16.027792] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.474 14:22:16 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.474 14:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2848027 00:11:52.474 14:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:52.474 14:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:52.474 14:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2848027 /var/tmp/bdevperf.sock 00:11:52.474 14:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2848027 ']' 00:11:52.474 14:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:52.474 14:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:52.474 14:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:52.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:52.474 14:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:52.474 14:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:52.474 [2024-10-07 14:22:16.121318] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:11:52.474 [2024-10-07 14:22:16.121447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848027 ] 00:11:52.734 [2024-10-07 14:22:16.252501] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.734 [2024-10-07 14:22:16.435096] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.306 14:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:53.306 14:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:11:53.306 14:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:53.306 14:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.306 14:22:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:53.567 NVMe0n1 00:11:53.567 14:22:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.567 14:22:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:53.567 Running I/O for 10 seconds... 
00:11:55.509 7502.00 IOPS, 29.30 MiB/s [2024-10-07T12:22:20.260Z] 8714.50 IOPS, 34.04 MiB/s [2024-10-07T12:22:21.202Z] 9474.00 IOPS, 37.01 MiB/s [2024-10-07T12:22:22.588Z] 9726.25 IOPS, 37.99 MiB/s [2024-10-07T12:22:23.530Z] 9994.80 IOPS, 39.04 MiB/s [2024-10-07T12:22:24.472Z] 10101.83 IOPS, 39.46 MiB/s [2024-10-07T12:22:25.414Z] 10236.43 IOPS, 39.99 MiB/s [2024-10-07T12:22:26.355Z] 10319.88 IOPS, 40.31 MiB/s [2024-10-07T12:22:27.298Z] 10357.00 IOPS, 40.46 MiB/s [2024-10-07T12:22:27.298Z] 10439.80 IOPS, 40.78 MiB/s 00:12:03.589 Latency(us) 00:12:03.589 [2024-10-07T12:22:27.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.589 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:03.589 Verification LBA range: start 0x0 length 0x4000 00:12:03.589 NVMe0n1 : 10.08 10455.42 40.84 0.00 0.00 97568.72 25886.72 82138.45 00:12:03.589 [2024-10-07T12:22:27.298Z] =================================================================================================================== 00:12:03.589 [2024-10-07T12:22:27.298Z] Total : 10455.42 40.84 0.00 0.00 97568.72 25886.72 82138.45 00:12:03.589 { 00:12:03.589 "results": [ 00:12:03.589 { 00:12:03.589 "job": "NVMe0n1", 00:12:03.589 "core_mask": "0x1", 00:12:03.589 "workload": "verify", 00:12:03.589 "status": "finished", 00:12:03.589 "verify_range": { 00:12:03.589 "start": 0, 00:12:03.589 "length": 16384 00:12:03.589 }, 00:12:03.589 "queue_depth": 1024, 00:12:03.589 "io_size": 4096, 00:12:03.589 "runtime": 10.081083, 00:12:03.589 "iops": 10455.42428328385, 00:12:03.589 "mibps": 40.84150110657754, 00:12:03.589 "io_failed": 0, 00:12:03.589 "io_timeout": 0, 00:12:03.589 "avg_latency_us": 97568.72365989261, 00:12:03.589 "min_latency_us": 25886.72, 00:12:03.589 "max_latency_us": 82138.45333333334 00:12:03.589 } 00:12:03.589 ], 00:12:03.589 "core_count": 1 00:12:03.589 } 00:12:03.852 14:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 
2848027 00:12:03.852 14:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2848027 ']' 00:12:03.852 14:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2848027 00:12:03.852 14:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:12:03.852 14:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:03.852 14:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2848027 00:12:03.852 14:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:03.852 14:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:03.852 14:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2848027' 00:12:03.852 killing process with pid 2848027 00:12:03.852 14:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2848027 00:12:03.852 Received shutdown signal, test time was about 10.000000 seconds 00:12:03.852 00:12:03.852 Latency(us) 00:12:03.852 [2024-10-07T12:22:27.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.852 [2024-10-07T12:22:27.561Z] =================================================================================================================== 00:12:03.852 [2024-10-07T12:22:27.561Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:03.852 14:22:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2848027 00:12:04.423 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:04.423 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
00:12:04.423 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:04.423 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:12:04.423 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:04.423 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:12:04.423 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:04.423 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:04.423 rmmod nvme_tcp 00:12:04.423 rmmod nvme_fabrics 00:12:04.423 rmmod nvme_keyring 00:12:04.684 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:04.684 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:12:04.684 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:12:04.684 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 2847911 ']' 00:12:04.684 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 2847911 00:12:04.684 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2847911 ']' 00:12:04.684 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2847911 00:12:04.684 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:12:04.684 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:04.684 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2847911 00:12:04.684 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:12:04.684 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:04.684 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2847911' 00:12:04.684 killing process with pid 2847911 00:12:04.684 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2847911 00:12:04.684 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2847911 00:12:05.254 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:05.254 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:05.254 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:05.254 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:12:05.254 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:12:05.254 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:05.254 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:12:05.515 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:05.515 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:05.515 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.515 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.515 14:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.429 14:22:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:07.429 00:12:07.429 real 0m23.835s 00:12:07.429 user 0m27.499s 00:12:07.429 sys 0m7.222s 00:12:07.429 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.429 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:07.429 ************************************ 00:12:07.429 END TEST nvmf_queue_depth 00:12:07.429 ************************************ 00:12:07.429 14:22:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:07.429 14:22:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:07.429 14:22:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.429 14:22:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:07.429 ************************************ 00:12:07.429 START TEST nvmf_target_multipath 00:12:07.429 ************************************ 00:12:07.429 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:07.691 * Looking for test storage... 
00:12:07.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:12:07.691 14:22:31 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.691 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:07.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.692 --rc genhtml_branch_coverage=1 00:12:07.692 --rc genhtml_function_coverage=1 00:12:07.692 --rc genhtml_legend=1 00:12:07.692 --rc geninfo_all_blocks=1 00:12:07.692 --rc geninfo_unexecuted_blocks=1 00:12:07.692 00:12:07.692 ' 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:07.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.692 --rc genhtml_branch_coverage=1 00:12:07.692 --rc genhtml_function_coverage=1 00:12:07.692 --rc genhtml_legend=1 00:12:07.692 --rc geninfo_all_blocks=1 00:12:07.692 --rc geninfo_unexecuted_blocks=1 00:12:07.692 00:12:07.692 ' 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:07.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.692 --rc genhtml_branch_coverage=1 00:12:07.692 --rc genhtml_function_coverage=1 00:12:07.692 --rc genhtml_legend=1 00:12:07.692 --rc geninfo_all_blocks=1 00:12:07.692 --rc geninfo_unexecuted_blocks=1 00:12:07.692 00:12:07.692 ' 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:07.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.692 --rc genhtml_branch_coverage=1 00:12:07.692 --rc genhtml_function_coverage=1 00:12:07.692 --rc genhtml_legend=1 00:12:07.692 --rc geninfo_all_blocks=1 00:12:07.692 --rc geninfo_unexecuted_blocks=1 00:12:07.692 00:12:07.692 ' 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:07.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
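The `common.sh: line 33: [: : integer expression expected` message in the trace above comes from `'[' '' -eq 1 ']'`: an unset variable expands to an empty string, and `[` cannot treat `''` as an integer. Because the script runs with `set +e` semantics here, the test simply fails and execution continues, but the noise is avoidable. A minimal illustration of the defensive pattern (the `flag` variable name is illustrative, not from the script):

```shell
# Reproducing the failure mode from the log: an empty variable in a
# numeric test makes [ complain "integer expression expected".
flag=""

# Defensive form: default the expansion to 0 before the -eq comparison,
# so the test is always handed a valid integer.
if [ "${flag:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```

The `${flag:-0}` expansion substitutes `0` when `flag` is unset or empty, which keeps `[ ... -eq ... ]` well-formed without changing behavior for any real numeric value.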
MALLOC_BDEV_SIZE=64 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:12:07.692 14:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:12:15.835 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.835 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:12:15.835 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:15.835 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:15.835 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:15.835 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:15.835 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:15.835 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:12:15.835 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:15.835 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:12:15.835 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:12:15.835 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:12:15.835 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:12:15.835 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:12:15.835 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:12:15.835 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.835 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.835 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.835 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.835 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:15.836 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:15.836 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:15.836 Found net devices under 0000:31:00.0: cvl_0_0 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:15.836 14:22:38 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:15.836 Found net devices under 0000:31:00.1: cvl_0_1 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:15.836 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:15.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:12:15.836 00:12:15.836 --- 10.0.0.2 ping statistics --- 00:12:15.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.836 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
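The `nvmf_tcp_init` sequence traced above moves the target NIC (`cvl_0_0`) into a fresh network namespace, addresses both ends on 10.0.0.0/24, opens TCP port 4420 on the initiator side, and pings across to verify the path. A dry-run sketch of that plumbing, with interface and namespace names taken from the log (the `run` wrapper only prints each step here; on the real test box these commands execute directly as root):

```shell
# Dry-run sketch of the namespace setup traced above. 'run' prints the
# command instead of executing it, so this is safe without privileges.
run() { echo "+ $*"; }

ns=cvl_0_0_ns_spdk
run ip netns add "$ns"
run ip link set cvl_0_0 netns "$ns"                # target NIC into the ns
run ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side stays in root ns
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$ns" ip link set cvl_0_0 up
run ip netns exec "$ns" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                             # initiator -> namespaced target
```

Note the iptables rule in the log carries an `-m comment --comment 'SPDK_NVMF:...'` tag; the later `iptr` cleanup step restores `iptables-save` output filtered through `grep -v SPDK_NVMF`, which is how the harness removes exactly its own rules.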
00:12:15.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:12:15.836 00:12:15.836 --- 10.0.0.1 ping statistics --- 00:12:15.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.836 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:15.836 only one NIC for nvmf test 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:12:15.836 14:22:38 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:12:15.836 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:15.837 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:15.837 rmmod nvme_tcp 00:12:15.837 rmmod nvme_fabrics 00:12:15.837 rmmod nvme_keyring 00:12:15.837 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:15.837 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:12:15.837 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:12:15.837 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:12:15.837 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:15.837 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:15.837 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:15.837 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:12:15.837 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:12:15.837 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:15.837 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:12:15.837 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:15.837 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:15.837 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.837 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.837 14:22:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' 
'' == iso ']' 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:17.224 00:12:17.224 real 0m9.637s 00:12:17.224 user 0m2.009s 00:12:17.224 sys 0m5.531s 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:17.224 ************************************ 00:12:17.224 END TEST nvmf_target_multipath 00:12:17.224 ************************************ 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:17.224 14:22:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:17.224 ************************************ 00:12:17.224 START TEST nvmf_zcopy 00:12:17.224 ************************************ 00:12:17.225 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:17.486 * Looking for test storage... 00:12:17.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:17.486 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:17.486 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:17.486 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:12:17.486 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:17.486 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.486 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.487 14:22:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:17.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.487 --rc genhtml_branch_coverage=1 00:12:17.487 --rc genhtml_function_coverage=1 00:12:17.487 --rc genhtml_legend=1 00:12:17.487 --rc geninfo_all_blocks=1 00:12:17.487 --rc geninfo_unexecuted_blocks=1 00:12:17.487 00:12:17.487 ' 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:17.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.487 --rc genhtml_branch_coverage=1 00:12:17.487 --rc genhtml_function_coverage=1 00:12:17.487 --rc genhtml_legend=1 00:12:17.487 --rc geninfo_all_blocks=1 00:12:17.487 --rc geninfo_unexecuted_blocks=1 00:12:17.487 00:12:17.487 ' 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:17.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.487 --rc genhtml_branch_coverage=1 00:12:17.487 --rc genhtml_function_coverage=1 00:12:17.487 --rc genhtml_legend=1 00:12:17.487 --rc geninfo_all_blocks=1 00:12:17.487 --rc geninfo_unexecuted_blocks=1 00:12:17.487 00:12:17.487 ' 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:17.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.487 --rc genhtml_branch_coverage=1 00:12:17.487 --rc 
genhtml_function_coverage=1 00:12:17.487 --rc genhtml_legend=1 00:12:17.487 --rc geninfo_all_blocks=1 00:12:17.487 --rc geninfo_unexecuted_blocks=1 00:12:17.487 00:12:17.487 ' 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.487 14:22:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:17.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:17.487 14:22:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:17.487 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:17.488 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:12:17.488 14:22:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:25.633 14:22:48 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:25.633 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:25.633 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:25.633 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:25.634 Found net devices under 0000:31:00.0: cvl_0_0 00:12:25.634 14:22:48 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:25.634 Found net devices under 0000:31:00.1: cvl_0_1 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.634 14:22:48 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:25.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:25.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:12:25.634 00:12:25.634 --- 10.0.0.2 ping statistics --- 00:12:25.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.634 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:25.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:25.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:12:25.634 00:12:25.634 --- 10.0.0.1 ping statistics --- 00:12:25.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.634 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=2859184 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 2859184 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2859184 ']' 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:25.634 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:25.634 [2024-10-07 14:22:48.712601] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:12:25.634 [2024-10-07 14:22:48.712730] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.634 [2024-10-07 14:22:48.873637] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.634 [2024-10-07 14:22:49.105038] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.634 [2024-10-07 14:22:49.105111] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:25.634 [2024-10-07 14:22:49.105125] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.634 [2024-10-07 14:22:49.105139] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.634 [2024-10-07 14:22:49.105149] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.634 [2024-10-07 14:22:49.106624] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:25.896 [2024-10-07 14:22:49.534174] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:25.896 [2024-10-07 14:22:49.558534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.896 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:26.158 malloc0 00:12:26.158 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:12:26.158 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:26.158 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.158 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:26.158 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.158 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:26.158 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:26.158 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:12:26.158 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:12:26.158 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:12:26.158 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:12:26.158 { 00:12:26.158 "params": { 00:12:26.158 "name": "Nvme$subsystem", 00:12:26.158 "trtype": "$TEST_TRANSPORT", 00:12:26.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:26.158 "adrfam": "ipv4", 00:12:26.158 "trsvcid": "$NVMF_PORT", 00:12:26.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:26.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:26.158 "hdgst": ${hdgst:-false}, 00:12:26.158 "ddgst": ${ddgst:-false} 00:12:26.158 }, 00:12:26.158 "method": "bdev_nvme_attach_controller" 00:12:26.158 } 00:12:26.158 EOF 00:12:26.158 )") 00:12:26.158 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:12:26.158 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:12:26.158 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:12:26.158 14:22:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:12:26.158 "params": { 00:12:26.158 "name": "Nvme1", 00:12:26.158 "trtype": "tcp", 00:12:26.158 "traddr": "10.0.0.2", 00:12:26.158 "adrfam": "ipv4", 00:12:26.158 "trsvcid": "4420", 00:12:26.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:26.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:26.158 "hdgst": false, 00:12:26.158 "ddgst": false 00:12:26.158 }, 00:12:26.158 "method": "bdev_nvme_attach_controller" 00:12:26.158 }' 00:12:26.158 [2024-10-07 14:22:49.743667] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:12:26.158 [2024-10-07 14:22:49.743791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2859515 ] 00:12:26.419 [2024-10-07 14:22:49.871978] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.419 [2024-10-07 14:22:50.059576] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.987 Running I/O for 10 seconds... 
00:12:28.868 6003.00 IOPS, 46.90 MiB/s [2024-10-07T12:22:53.516Z] 6045.50 IOPS, 47.23 MiB/s [2024-10-07T12:22:54.457Z] 6791.33 IOPS, 53.06 MiB/s [2024-10-07T12:22:55.839Z] 7290.25 IOPS, 56.96 MiB/s [2024-10-07T12:22:56.779Z] 7587.80 IOPS, 59.28 MiB/s [2024-10-07T12:22:57.719Z] 7784.50 IOPS, 60.82 MiB/s [2024-10-07T12:22:58.659Z] 7927.57 IOPS, 61.93 MiB/s [2024-10-07T12:22:59.598Z] 8033.75 IOPS, 62.76 MiB/s [2024-10-07T12:23:00.539Z] 8117.44 IOPS, 63.42 MiB/s [2024-10-07T12:23:00.539Z] 8180.20 IOPS, 63.91 MiB/s 00:12:36.830 Latency(us) 00:12:36.830 [2024-10-07T12:23:00.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.830 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:36.830 Verification LBA range: start 0x0 length 0x1000 00:12:36.830 Nvme1n1 : 10.01 8181.14 63.92 0.00 0.00 15588.36 1788.59 31894.19 00:12:36.830 [2024-10-07T12:23:00.539Z] =================================================================================================================== 00:12:36.830 [2024-10-07T12:23:00.539Z] Total : 8181.14 63.92 0.00 0.00 15588.36 1788.59 31894.19 00:12:37.771 14:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2861646 00:12:37.771 14:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:37.771 14:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:37.771 14:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:37.771 14:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:37.771 14:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:12:37.771 14:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:12:37.771 14:23:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:12:37.771 14:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:12:37.771 { 00:12:37.771 "params": { 00:12:37.771 "name": "Nvme$subsystem", 00:12:37.771 "trtype": "$TEST_TRANSPORT", 00:12:37.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:37.771 "adrfam": "ipv4", 00:12:37.771 "trsvcid": "$NVMF_PORT", 00:12:37.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:37.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:37.771 "hdgst": ${hdgst:-false}, 00:12:37.771 "ddgst": ${ddgst:-false} 00:12:37.771 }, 00:12:37.771 "method": "bdev_nvme_attach_controller" 00:12:37.771 } 00:12:37.771 EOF 00:12:37.771 )") 00:12:37.771 [2024-10-07 14:23:01.183387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.771 [2024-10-07 14:23:01.183421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.771 14:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:12:37.771 14:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:12:37.771 14:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:12:37.771 14:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:12:37.771 "params": { 00:12:37.771 "name": "Nvme1", 00:12:37.771 "trtype": "tcp", 00:12:37.771 "traddr": "10.0.0.2", 00:12:37.771 "adrfam": "ipv4", 00:12:37.771 "trsvcid": "4420", 00:12:37.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:37.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:37.771 "hdgst": false, 00:12:37.771 "ddgst": false 00:12:37.771 }, 00:12:37.771 "method": "bdev_nvme_attach_controller" 00:12:37.771 }' 00:12:37.771 [2024-10-07 14:23:01.195363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.771 [2024-10-07 14:23:01.195383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.771 [2024-10-07 14:23:01.207400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.771 [2024-10-07 14:23:01.207418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.771 [2024-10-07 14:23:01.219420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.771 [2024-10-07 14:23:01.219437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.771 [2024-10-07 14:23:01.231440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.771 [2024-10-07 14:23:01.231457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.771 [2024-10-07 14:23:01.243482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.771 [2024-10-07 14:23:01.243500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.771 [2024-10-07 14:23:01.255513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.771 [2024-10-07 
14:23:01.255531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.771 [2024-10-07 14:23:01.266662] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:12:37.772 [2024-10-07 14:23:01.266759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2861646 ] 00:12:37.772 [2024-10-07 14:23:01.267544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.772 [2024-10-07 14:23:01.267561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.772 [2024-10-07 14:23:01.279570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.772 [2024-10-07 14:23:01.279587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.772 [2024-10-07 14:23:01.291598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.772 [2024-10-07 14:23:01.291614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.772 [2024-10-07 14:23:01.303638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.772 [2024-10-07 14:23:01.303656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.772 [2024-10-07 14:23:01.315667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.772 [2024-10-07 14:23:01.315684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.772 [2024-10-07 14:23:01.327688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.772 [2024-10-07 14:23:01.327704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.772 [2024-10-07 
14:23:01.339736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.772 [2024-10-07 14:23:01.339753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.772 [2024-10-07 14:23:01.351763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.772 [2024-10-07 14:23:01.351779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.772 [2024-10-07 14:23:01.363779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.772 [2024-10-07 14:23:01.363795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.772 [2024-10-07 14:23:01.375818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.772 [2024-10-07 14:23:01.375835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.772 [2024-10-07 14:23:01.381793] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.772 [2024-10-07 14:23:01.387843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.772 [2024-10-07 14:23:01.387860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.772 [2024-10-07 14:23:01.399885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.772 [2024-10-07 14:23:01.399902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.772 [2024-10-07 14:23:01.411923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.772 [2024-10-07 14:23:01.411939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.772 [2024-10-07 14:23:01.423938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.772 [2024-10-07 14:23:01.423954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:12:37.772 [2024-10-07 14:23:01.435979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.772 [2024-10-07 14:23:01.435995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.772 [2024-10-07 14:23:01.448018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.772 [2024-10-07 14:23:01.448034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.772 [2024-10-07 14:23:01.460038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.772 [2024-10-07 14:23:01.460054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:37.772 [2024-10-07 14:23:01.472079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:37.772 [2024-10-07 14:23:01.472096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.032 [2024-10-07 14:23:01.484099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.032 [2024-10-07 14:23:01.484116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.032 [2024-10-07 14:23:01.496139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.032 [2024-10-07 14:23:01.496155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.032 [2024-10-07 14:23:01.508180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.032 [2024-10-07 14:23:01.508198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.032 [2024-10-07 14:23:01.520191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.032 [2024-10-07 14:23:01.520207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.032 [2024-10-07 
14:23:01.532234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.032 [2024-10-07 14:23:01.532250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.032 [2024-10-07 14:23:01.544268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.032 [2024-10-07 14:23:01.544285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.032 [2024-10-07 14:23:01.556289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.032 [2024-10-07 14:23:01.556306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.032 [2024-10-07 14:23:01.562009] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.032 [2024-10-07 14:23:01.568325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.032 [2024-10-07 14:23:01.568341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.032 [2024-10-07 14:23:01.580348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.032 [2024-10-07 14:23:01.580366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.032 [2024-10-07 14:23:01.592388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.032 [2024-10-07 14:23:01.592404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.032 [2024-10-07 14:23:01.604418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.032 [2024-10-07 14:23:01.604438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.032 [2024-10-07 14:23:01.616441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.032 [2024-10-07 14:23:01.616458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:12:38.032 [2024-10-07 14:23:01.628488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.032 [2024-10-07 14:23:01.628506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.033 [2024-10-07 14:23:01.640520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.033 [2024-10-07 14:23:01.640539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.033 [2024-10-07 14:23:01.652547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.033 [2024-10-07 14:23:01.652565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.033 [2024-10-07 14:23:01.664575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.033 [2024-10-07 14:23:01.664592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.033 [2024-10-07 14:23:01.676598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.033 [2024-10-07 14:23:01.676615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.033 [2024-10-07 14:23:01.688641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.033 [2024-10-07 14:23:01.688658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.033 [2024-10-07 14:23:01.700679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.033 [2024-10-07 14:23:01.700694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.033 [2024-10-07 14:23:01.712692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.033 [2024-10-07 14:23:01.712708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.033 
[2024-10-07 14:23:01.724738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.033 [2024-10-07 14:23:01.724755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.033 [2024-10-07 14:23:01.736768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.033 [2024-10-07 14:23:01.736784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.293 [2024-10-07 14:23:01.748791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.293 [2024-10-07 14:23:01.748807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.293 [2024-10-07 14:23:01.760831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.293 [2024-10-07 14:23:01.760847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.293 [2024-10-07 14:23:01.772854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.293 [2024-10-07 14:23:01.772870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.293 [2024-10-07 14:23:01.784895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.293 [2024-10-07 14:23:01.784910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.293 [2024-10-07 14:23:01.796926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.293 [2024-10-07 14:23:01.796942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.293 [2024-10-07 14:23:01.808952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.293 [2024-10-07 14:23:01.808968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.293 [2024-10-07 14:23:01.821007] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.293 [2024-10-07 14:23:01.821026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.293 [2024-10-07 14:23:01.833044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.293 [2024-10-07 14:23:01.833061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.293 [2024-10-07 14:23:01.845057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.293 [2024-10-07 14:23:01.845074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.293 [2024-10-07 14:23:01.857095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.293 [2024-10-07 14:23:01.857111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.293 [2024-10-07 14:23:01.869114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.293 [2024-10-07 14:23:01.869130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.293 [2024-10-07 14:23:01.881161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.293 [2024-10-07 14:23:01.881177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.293 [2024-10-07 14:23:01.893192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.293 [2024-10-07 14:23:01.893209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.293 [2024-10-07 14:23:01.905210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.293 [2024-10-07 14:23:01.905227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.293 [2024-10-07 14:23:01.917256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:38.293 [2024-10-07 14:23:01.917272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.293 [2024-10-07 14:23:01.929286] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.293 [2024-10-07 14:23:01.929302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.293 [2024-10-07 14:23:01.941313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.293 [2024-10-07 14:23:01.941330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.293 [2024-10-07 14:23:01.953355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.294 [2024-10-07 14:23:01.953371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.294 [2024-10-07 14:23:01.965382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.294 [2024-10-07 14:23:01.965398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.294 [2024-10-07 14:23:01.977428] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.294 [2024-10-07 14:23:01.977445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.294 [2024-10-07 14:23:01.989449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.294 [2024-10-07 14:23:01.989466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.294 [2024-10-07 14:23:02.001473] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.294 [2024-10-07 14:23:02.001489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.554 [2024-10-07 14:23:02.013519] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.554 
[2024-10-07 14:23:02.013536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.554 [2024-10-07 14:23:02.025551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.554 [2024-10-07 14:23:02.025567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.554 [2024-10-07 14:23:02.073547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.554 [2024-10-07 14:23:02.073567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.554 [2024-10-07 14:23:02.081713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.554 [2024-10-07 14:23:02.081730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.554 Running I/O for 5 seconds... 00:12:38.554 [2024-10-07 14:23:02.098190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.554 [2024-10-07 14:23:02.098210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.554 [2024-10-07 14:23:02.111803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.554 [2024-10-07 14:23:02.111823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.554 [2024-10-07 14:23:02.125769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.554 [2024-10-07 14:23:02.125789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.554 [2024-10-07 14:23:02.140024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.554 [2024-10-07 14:23:02.140043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.554 [2024-10-07 14:23:02.150832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.554 [2024-10-07 
14:23:02.150851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.554 [2024-10-07 14:23:02.164881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.554 [2024-10-07 14:23:02.164902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.554 [2024-10-07 14:23:02.178542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.554 [2024-10-07 14:23:02.178561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.554 [2024-10-07 14:23:02.192819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.554 [2024-10-07 14:23:02.192838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.554 [2024-10-07 14:23:02.204068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.554 [2024-10-07 14:23:02.204086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.554 [2024-10-07 14:23:02.218053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.554 [2024-10-07 14:23:02.218071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.554 [2024-10-07 14:23:02.231947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.554 [2024-10-07 14:23:02.231965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.554 [2024-10-07 14:23:02.246403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.554 [2024-10-07 14:23:02.246422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.554 [2024-10-07 14:23:02.261646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.554 [2024-10-07 14:23:02.261665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:12:38.814 [2024-10-07 14:23:02.275509] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.814 [2024-10-07 14:23:02.275528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.814 [2024-10-07 14:23:02.289647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.814 [2024-10-07 14:23:02.289666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.814 [2024-10-07 14:23:02.301141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.814 [2024-10-07 14:23:02.301160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.814 [2024-10-07 14:23:02.314915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.814 [2024-10-07 14:23:02.314933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.814 [2024-10-07 14:23:02.329138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.814 [2024-10-07 14:23:02.329156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.814 [2024-10-07 14:23:02.344403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.814 [2024-10-07 14:23:02.344422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.814 [2024-10-07 14:23:02.358236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.814 [2024-10-07 14:23:02.358254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.814 [2024-10-07 14:23:02.371804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.814 [2024-10-07 14:23:02.371822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.814 
[2024-10-07 14:23:02.385860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:38.814 [2024-10-07 14:23:02.385897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:38.814 
(the error pair above repeats approximately every 11-16 ms from 14:23:02.385 through 14:23:04.727; repeats omitted) 
17138.00 IOPS, 133.89 MiB/s [2024-10-07T12:23:03.307Z] 
17204.50 IOPS, 134.41 MiB/s [2024-10-07T12:23:04.350Z] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.163 [2024-10-07 14:23:04.741492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.163 [2024-10-07 14:23:04.741510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.163 [2024-10-07 14:23:04.754699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.163 [2024-10-07 14:23:04.754719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.163 [2024-10-07 14:23:04.768815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.163 [2024-10-07 14:23:04.768835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.163 [2024-10-07 14:23:04.780194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.163 [2024-10-07 14:23:04.780214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.163 [2024-10-07 14:23:04.794446] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.163 [2024-10-07 14:23:04.794466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.163 [2024-10-07 14:23:04.807700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.163 [2024-10-07 14:23:04.807718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.163 [2024-10-07 14:23:04.821952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.163 [2024-10-07 14:23:04.821970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.163 [2024-10-07 14:23:04.837739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.163 [2024-10-07 14:23:04.837759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:41.163 [2024-10-07 14:23:04.851559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.163 [2024-10-07 14:23:04.851579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.163 [2024-10-07 14:23:04.865505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.163 [2024-10-07 14:23:04.865524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.425 [2024-10-07 14:23:04.879056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.425 [2024-10-07 14:23:04.879076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.425 [2024-10-07 14:23:04.892980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.425 [2024-10-07 14:23:04.892999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.425 [2024-10-07 14:23:04.906912] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.425 [2024-10-07 14:23:04.906931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.425 [2024-10-07 14:23:04.920458] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.425 [2024-10-07 14:23:04.920476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.425 [2024-10-07 14:23:04.934577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.425 [2024-10-07 14:23:04.934600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.425 [2024-10-07 14:23:04.945990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.425 [2024-10-07 14:23:04.946018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.425 [2024-10-07 14:23:04.960256] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.425 [2024-10-07 14:23:04.960275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.425 [2024-10-07 14:23:04.973926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.425 [2024-10-07 14:23:04.973946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.425 [2024-10-07 14:23:04.987036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.425 [2024-10-07 14:23:04.987056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.425 [2024-10-07 14:23:05.000579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.425 [2024-10-07 14:23:05.000599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.425 [2024-10-07 14:23:05.013996] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.425 [2024-10-07 14:23:05.014027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.425 [2024-10-07 14:23:05.027560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.425 [2024-10-07 14:23:05.027579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.425 [2024-10-07 14:23:05.041309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.425 [2024-10-07 14:23:05.041328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.425 [2024-10-07 14:23:05.055450] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.425 [2024-10-07 14:23:05.055469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.425 [2024-10-07 14:23:05.069416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:41.425 [2024-10-07 14:23:05.069435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.425 [2024-10-07 14:23:05.080337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.425 [2024-10-07 14:23:05.080355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.425 [2024-10-07 14:23:05.094658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.425 [2024-10-07 14:23:05.094677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.425 17214.67 IOPS, 134.49 MiB/s [2024-10-07T12:23:05.134Z] [2024-10-07 14:23:05.108364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.425 [2024-10-07 14:23:05.108382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.425 [2024-10-07 14:23:05.122469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.425 [2024-10-07 14:23:05.122488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.425 [2024-10-07 14:23:05.133598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.425 [2024-10-07 14:23:05.133616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.686 [2024-10-07 14:23:05.147756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.686 [2024-10-07 14:23:05.147775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.686 [2024-10-07 14:23:05.161449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.686 [2024-10-07 14:23:05.161468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.686 [2024-10-07 14:23:05.174934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:41.686 [2024-10-07 14:23:05.174953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.686 [2024-10-07 14:23:05.188515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.686 [2024-10-07 14:23:05.188533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.686 [2024-10-07 14:23:05.202404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.686 [2024-10-07 14:23:05.202423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.686 [2024-10-07 14:23:05.216122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.686 [2024-10-07 14:23:05.216141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.686 [2024-10-07 14:23:05.229962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.686 [2024-10-07 14:23:05.229981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.686 [2024-10-07 14:23:05.243882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.686 [2024-10-07 14:23:05.243901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.686 [2024-10-07 14:23:05.257345] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.686 [2024-10-07 14:23:05.257364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.686 [2024-10-07 14:23:05.270871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.686 [2024-10-07 14:23:05.270889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.686 [2024-10-07 14:23:05.284838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.686 
[2024-10-07 14:23:05.284857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.686 [2024-10-07 14:23:05.298731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.686 [2024-10-07 14:23:05.298750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.686 [2024-10-07 14:23:05.313009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.686 [2024-10-07 14:23:05.313027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.686 [2024-10-07 14:23:05.324369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.686 [2024-10-07 14:23:05.324388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.686 [2024-10-07 14:23:05.338616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.686 [2024-10-07 14:23:05.338634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.686 [2024-10-07 14:23:05.352267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.686 [2024-10-07 14:23:05.352285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.686 [2024-10-07 14:23:05.366082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.686 [2024-10-07 14:23:05.366101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.686 [2024-10-07 14:23:05.379942] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.686 [2024-10-07 14:23:05.379961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.686 [2024-10-07 14:23:05.393692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.686 [2024-10-07 14:23:05.393711] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.947 [2024-10-07 14:23:05.407114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.947 [2024-10-07 14:23:05.407133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.947 [2024-10-07 14:23:05.421253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.947 [2024-10-07 14:23:05.421272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.947 [2024-10-07 14:23:05.435066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.947 [2024-10-07 14:23:05.435085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.947 [2024-10-07 14:23:05.449067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.947 [2024-10-07 14:23:05.449086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.947 [2024-10-07 14:23:05.460724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.947 [2024-10-07 14:23:05.460743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.947 [2024-10-07 14:23:05.474701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.947 [2024-10-07 14:23:05.474719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.947 [2024-10-07 14:23:05.488046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.947 [2024-10-07 14:23:05.488065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.947 [2024-10-07 14:23:05.502019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.947 [2024-10-07 14:23:05.502038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:41.947 [2024-10-07 14:23:05.515966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.947 [2024-10-07 14:23:05.515985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.947 [2024-10-07 14:23:05.529924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.947 [2024-10-07 14:23:05.529942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.947 [2024-10-07 14:23:05.544005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.947 [2024-10-07 14:23:05.544024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.947 [2024-10-07 14:23:05.557760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.947 [2024-10-07 14:23:05.557780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.947 [2024-10-07 14:23:05.571438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.947 [2024-10-07 14:23:05.571457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.947 [2024-10-07 14:23:05.585301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.947 [2024-10-07 14:23:05.585320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.947 [2024-10-07 14:23:05.598991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.947 [2024-10-07 14:23:05.599014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.947 [2024-10-07 14:23:05.612638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.947 [2024-10-07 14:23:05.612656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.947 [2024-10-07 14:23:05.626726] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.947 [2024-10-07 14:23:05.626744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.947 [2024-10-07 14:23:05.640593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.947 [2024-10-07 14:23:05.640611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:41.947 [2024-10-07 14:23:05.651943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:41.947 [2024-10-07 14:23:05.651962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.207 [2024-10-07 14:23:05.666072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.207 [2024-10-07 14:23:05.666091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.207 [2024-10-07 14:23:05.680274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.207 [2024-10-07 14:23:05.680292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.207 [2024-10-07 14:23:05.695819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.207 [2024-10-07 14:23:05.695842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.207 [2024-10-07 14:23:05.709511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.207 [2024-10-07 14:23:05.709530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.207 [2024-10-07 14:23:05.723347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.207 [2024-10-07 14:23:05.723365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.207 [2024-10-07 14:23:05.737281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:42.207 [2024-10-07 14:23:05.737300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.207 [2024-10-07 14:23:05.751144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.207 [2024-10-07 14:23:05.751163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.207 [2024-10-07 14:23:05.762498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.207 [2024-10-07 14:23:05.762518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.207 [2024-10-07 14:23:05.776176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.207 [2024-10-07 14:23:05.776195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.208 [2024-10-07 14:23:05.789651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.208 [2024-10-07 14:23:05.789669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.208 [2024-10-07 14:23:05.803406] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.208 [2024-10-07 14:23:05.803424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.208 [2024-10-07 14:23:05.817629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.208 [2024-10-07 14:23:05.817647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.208 [2024-10-07 14:23:05.833036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.208 [2024-10-07 14:23:05.833055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.208 [2024-10-07 14:23:05.847247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.208 
[2024-10-07 14:23:05.847265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.208 [2024-10-07 14:23:05.862853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.208 [2024-10-07 14:23:05.862872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.208 [2024-10-07 14:23:05.876572] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.208 [2024-10-07 14:23:05.876591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.208 [2024-10-07 14:23:05.890672] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.208 [2024-10-07 14:23:05.890696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.208 [2024-10-07 14:23:05.905793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.208 [2024-10-07 14:23:05.905811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.469 [2024-10-07 14:23:05.920474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.469 [2024-10-07 14:23:05.920492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.469 [2024-10-07 14:23:05.936207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.469 [2024-10-07 14:23:05.936225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.469 [2024-10-07 14:23:05.949548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.469 [2024-10-07 14:23:05.949566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.469 [2024-10-07 14:23:05.963632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.469 [2024-10-07 14:23:05.963656] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.469 [2024-10-07 14:23:05.977561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.469 [2024-10-07 14:23:05.977580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.469 [2024-10-07 14:23:05.991441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.469 [2024-10-07 14:23:05.991459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.469 [2024-10-07 14:23:06.005099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.469 [2024-10-07 14:23:06.005118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.469 [2024-10-07 14:23:06.018872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.469 [2024-10-07 14:23:06.018890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.469 [2024-10-07 14:23:06.032686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.469 [2024-10-07 14:23:06.032704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.469 [2024-10-07 14:23:06.046486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.469 [2024-10-07 14:23:06.046504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.469 [2024-10-07 14:23:06.060188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.469 [2024-10-07 14:23:06.060206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.469 [2024-10-07 14:23:06.073940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.469 [2024-10-07 14:23:06.073960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:42.469 [2024-10-07 14:23:06.087995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.469 [2024-10-07 14:23:06.088023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.469 [2024-10-07 14:23:06.099206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.469 [2024-10-07 14:23:06.099226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.469 17224.25 IOPS, 134.56 MiB/s [2024-10-07T12:23:06.178Z] [2024-10-07 14:23:06.113240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.469 [2024-10-07 14:23:06.113259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.469 [2024-10-07 14:23:06.126677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.469 [2024-10-07 14:23:06.126696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.469 [2024-10-07 14:23:06.140593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.469 [2024-10-07 14:23:06.140612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.469 [2024-10-07 14:23:06.154316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.469 [2024-10-07 14:23:06.154334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.469 [2024-10-07 14:23:06.168101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.469 [2024-10-07 14:23:06.168119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:42.730 [2024-10-07 14:23:06.181890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:42.730 [2024-10-07 14:23:06.181909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace
00:12:42.730 [2024-10-07 14:23:06.195013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:42.730 [2024-10-07 14:23:06.195032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats at roughly 12-14 ms intervals; intermediate repetitions elided ...]
00:12:43.516 [2024-10-07 14:23:07.098445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:43.516 [2024-10-07 14:23:07.098464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:43.516 17222.20 IOPS, 134.55 MiB/s
00:12:43.516 Latency(us)
00:12:43.516 [2024-10-07T12:23:07.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:43.516 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:43.516 Nvme1n1 : 5.01 17226.64 134.58 0.00 0.00 7423.73 3386.03 15400.96
00:12:43.516 [2024-10-07T12:23:07.225Z] ===================================================================================================================
00:12:43.516 [2024-10-07T12:23:07.225Z] Total : 17226.64 134.58 0.00 0.00 7423.73 3386.03 15400.96
00:12:43.516 [2024-10-07 14:23:07.108609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:43.516 [2024-10-07 14:23:07.108627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the error pair continues repeating at roughly 12 ms intervals; intermediate repetitions elided ...]
00:12:44.301 [2024-10-07 14:23:07.814440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:44.301 [2024-10-07 14:23:07.814457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:44.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2861646) - No such process
00:12:44.301 14:23:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2861646
00:12:44.301 14:23:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:44.301 14:23:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:44.301 14:23:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:44.301 14:23:07
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.301 14:23:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:44.301 14:23:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.301 14:23:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:44.301 delay0 00:12:44.301 14:23:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.301 14:23:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:44.301 14:23:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.301 14:23:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:44.301 14:23:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.301 14:23:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:44.563 [2024-10-07 14:23:08.033146] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:52.702 Initializing NVMe Controllers 00:12:52.702 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:52.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:52.702 Initialization complete. Launching workers. 
00:12:52.702 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 235, failed: 27954 00:12:52.702 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 28060, failed to submit 129 00:12:52.702 success 27996, unsuccessful 64, failed 0 00:12:52.702 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:52.702 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:52.702 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:52.702 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:12:52.702 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:52.702 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:52.703 rmmod nvme_tcp 00:12:52.703 rmmod nvme_fabrics 00:12:52.703 rmmod nvme_keyring 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 2859184 ']' 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 2859184 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2859184 ']' 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2859184 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@955 -- # uname 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2859184 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2859184' 00:12:52.703 killing process with pid 2859184 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2859184 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2859184 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.703 14:23:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.618 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:54.618 00:12:54.618 real 0m37.166s 00:12:54.618 user 0m50.402s 00:12:54.618 sys 0m11.693s 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:54.618 ************************************ 00:12:54.618 END TEST nvmf_zcopy 00:12:54.618 ************************************ 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:54.618 ************************************ 00:12:54.618 START TEST nvmf_nmic 00:12:54.618 ************************************ 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:54.618 * Looking for test storage... 
00:12:54.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:54.618 14:23:18 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:54.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.618 --rc genhtml_branch_coverage=1 00:12:54.618 --rc genhtml_function_coverage=1 00:12:54.618 --rc genhtml_legend=1 00:12:54.618 --rc geninfo_all_blocks=1 00:12:54.618 --rc geninfo_unexecuted_blocks=1 
00:12:54.618 00:12:54.618 ' 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:54.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.618 --rc genhtml_branch_coverage=1 00:12:54.618 --rc genhtml_function_coverage=1 00:12:54.618 --rc genhtml_legend=1 00:12:54.618 --rc geninfo_all_blocks=1 00:12:54.618 --rc geninfo_unexecuted_blocks=1 00:12:54.618 00:12:54.618 ' 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:54.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.618 --rc genhtml_branch_coverage=1 00:12:54.618 --rc genhtml_function_coverage=1 00:12:54.618 --rc genhtml_legend=1 00:12:54.618 --rc geninfo_all_blocks=1 00:12:54.618 --rc geninfo_unexecuted_blocks=1 00:12:54.618 00:12:54.618 ' 00:12:54.618 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:54.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.618 --rc genhtml_branch_coverage=1 00:12:54.618 --rc genhtml_function_coverage=1 00:12:54.618 --rc genhtml_legend=1 00:12:54.618 --rc geninfo_all_blocks=1 00:12:54.618 --rc geninfo_unexecuted_blocks=1 00:12:54.618 00:12:54.618 ' 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.619 14:23:18 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:54.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:54.619 
14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:12:54.619 14:23:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:02.759 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:02.759 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:13:02.759 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:02.759 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:02.759 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:02.759 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:02.759 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:02.759 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:13:02.759 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:02.759 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:13:02.759 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:13:02.759 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:13:02.759 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:13:02.759 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.760 14:23:25 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:02.760 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:02.760 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 
00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:02.760 Found net devices under 0000:31:00.0: cvl_0_0 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:02.760 Found net devices under 0000:31:00.1: cvl_0_1 00:13:02.760 
14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:02.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:02.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.505 ms 00:13:02.760 00:13:02.760 --- 10.0.0.2 ping statistics --- 00:13:02.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.760 rtt min/avg/max/mdev = 0.505/0.505/0.505/0.000 ms 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:02.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:02.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:13:02.760 00:13:02.760 --- 10.0.0.1 ping statistics --- 00:13:02.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.760 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=2868957 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 2868957 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2868957 ']' 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:02.760 14:23:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:02.760 [2024-10-07 14:23:25.745204] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:13:02.761 [2024-10-07 14:23:25.745334] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.761 [2024-10-07 14:23:25.885662] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.761 [2024-10-07 14:23:26.068801] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.761 [2024-10-07 14:23:26.068849] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:02.761 [2024-10-07 14:23:26.068861] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.761 [2024-10-07 14:23:26.068873] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.761 [2024-10-07 14:23:26.068882] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:02.761 [2024-10-07 14:23:26.071142] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.761 [2024-10-07 14:23:26.071227] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.761 [2024-10-07 14:23:26.071343] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.761 [2024-10-07 14:23:26.071365] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:03.022 [2024-10-07 14:23:26.560701] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.022 
14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:03.022 Malloc0 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:03.022 [2024-10-07 14:23:26.659200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:03.022 test case1: single bdev can't be used in multiple subsystems 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:03.022 [2024-10-07 14:23:26.695057] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:03.022 [2024-10-07 
14:23:26.695092] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:03.022 [2024-10-07 14:23:26.695105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:03.022 request: 00:13:03.022 { 00:13:03.022 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:03.022 "namespace": { 00:13:03.022 "bdev_name": "Malloc0", 00:13:03.022 "no_auto_visible": false 00:13:03.022 }, 00:13:03.022 "method": "nvmf_subsystem_add_ns", 00:13:03.022 "req_id": 1 00:13:03.022 } 00:13:03.022 Got JSON-RPC error response 00:13:03.022 response: 00:13:03.022 { 00:13:03.022 "code": -32602, 00:13:03.022 "message": "Invalid parameters" 00:13:03.022 } 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:03.022 Adding namespace failed - expected result. 
00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:03.022 test case2: host connect to nvmf target in multiple paths 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:03.022 [2024-10-07 14:23:26.707241] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.022 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.933 14:23:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:06.316 14:23:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.316 14:23:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:06.316 14:23:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.316 14:23:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:06.316 14:23:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 
00:13:08.232 14:23:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:08.232 14:23:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:08.232 14:23:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.232 14:23:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:08.232 14:23:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.232 14:23:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:08.232 14:23:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:08.232 [global] 00:13:08.232 thread=1 00:13:08.232 invalidate=1 00:13:08.232 rw=write 00:13:08.232 time_based=1 00:13:08.232 runtime=1 00:13:08.232 ioengine=libaio 00:13:08.232 direct=1 00:13:08.232 bs=4096 00:13:08.232 iodepth=1 00:13:08.232 norandommap=0 00:13:08.232 numjobs=1 00:13:08.232 00:13:08.232 verify_dump=1 00:13:08.232 verify_backlog=512 00:13:08.232 verify_state_save=0 00:13:08.232 do_verify=1 00:13:08.232 verify=crc32c-intel 00:13:08.232 [job0] 00:13:08.232 filename=/dev/nvme0n1 00:13:08.232 Could not set queue depth (nvme0n1) 00:13:08.492 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:08.492 fio-3.35 00:13:08.492 Starting 1 thread 00:13:09.875 00:13:09.875 job0: (groupid=0, jobs=1): err= 0: pid=2870365: Mon Oct 7 14:23:33 2024 00:13:09.875 read: IOPS=125, BW=500KiB/s (512kB/s)(508KiB/1016msec) 00:13:09.875 slat (nsec): min=7431, max=44451, avg=25321.73, stdev=4475.73 00:13:09.875 clat (usec): min=545, max=42021, avg=6002.78, stdev=13614.17 00:13:09.875 lat (usec): min=571, max=42046, 
avg=6028.10, stdev=13613.54 00:13:09.875 clat percentiles (usec): 00:13:09.875 | 1.00th=[ 603], 5.00th=[ 652], 10.00th=[ 693], 20.00th=[ 783], 00:13:09.875 | 30.00th=[ 840], 40.00th=[ 881], 50.00th=[ 898], 60.00th=[ 922], 00:13:09.875 | 70.00th=[ 947], 80.00th=[ 971], 90.00th=[41157], 95.00th=[42206], 00:13:09.875 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:09.875 | 99.99th=[42206] 00:13:09.875 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:13:09.875 slat (nsec): min=9420, max=50959, avg=24062.86, stdev=11575.19 00:13:09.875 clat (usec): min=134, max=804, avg=457.17, stdev=87.67 00:13:09.875 lat (usec): min=146, max=837, avg=481.23, stdev=88.27 00:13:09.875 clat percentiles (usec): 00:13:09.875 | 1.00th=[ 253], 5.00th=[ 318], 10.00th=[ 338], 20.00th=[ 388], 00:13:09.875 | 30.00th=[ 424], 40.00th=[ 449], 50.00th=[ 465], 60.00th=[ 478], 00:13:09.875 | 70.00th=[ 490], 80.00th=[ 515], 90.00th=[ 570], 95.00th=[ 594], 00:13:09.875 | 99.00th=[ 652], 99.50th=[ 709], 99.90th=[ 807], 99.95th=[ 807], 00:13:09.875 | 99.99th=[ 807] 00:13:09.875 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:13:09.875 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:09.875 lat (usec) : 250=0.78%, 500=58.37%, 750=23.79%, 1000=14.40% 00:13:09.875 lat (msec) : 2=0.16%, 50=2.50% 00:13:09.875 cpu : usr=0.49%, sys=1.77%, ctx=639, majf=0, minf=1 00:13:09.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:09.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.875 issued rwts: total=127,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:09.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:09.875 00:13:09.875 Run status group 0 (all jobs): 00:13:09.875 READ: bw=500KiB/s (512kB/s), 500KiB/s-500KiB/s (512kB/s-512kB/s), 
io=508KiB (520kB), run=1016-1016msec 00:13:09.875 WRITE: bw=2016KiB/s (2064kB/s), 2016KiB/s-2016KiB/s (2064kB/s-2064kB/s), io=2048KiB (2097kB), run=1016-1016msec 00:13:09.875 00:13:09.875 Disk stats (read/write): 00:13:09.875 nvme0n1: ios=174/512, merge=0/0, ticks=700/230, in_queue=930, util=93.39% 00:13:09.875 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:10.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:10.134 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:10.134 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:10.134 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:10.134 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.134 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:10.134 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.134 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:10.134 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:10.134 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:10.134 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:10.134 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:13:10.134 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:10.134 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:13:10.134 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:13:10.134 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:10.134 rmmod nvme_tcp 00:13:10.134 rmmod nvme_fabrics 00:13:10.395 rmmod nvme_keyring 00:13:10.395 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:10.395 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:13:10.395 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:13:10.395 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 2868957 ']' 00:13:10.395 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 2868957 00:13:10.395 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2868957 ']' 00:13:10.395 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2868957 00:13:10.395 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:13:10.395 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:10.395 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2868957 00:13:10.395 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:10.395 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:10.395 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2868957' 00:13:10.395 killing process with pid 2868957 00:13:10.395 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2868957 00:13:10.395 14:23:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2868957 00:13:11.336 14:23:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:11.337 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:11.337 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:11.337 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:13:11.337 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:13:11.337 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:11.337 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:13:11.337 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:11.337 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:11.337 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.337 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.337 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.880 14:23:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:13.881 00:13:13.881 real 0m18.912s 00:13:13.881 user 0m48.382s 00:13:13.881 sys 0m6.629s 00:13:13.881 14:23:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.881 14:23:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:13.881 ************************************ 00:13:13.881 END TEST nvmf_nmic 00:13:13.881 ************************************ 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:13.881 ************************************ 00:13:13.881 START TEST nvmf_fio_target 00:13:13.881 ************************************ 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:13.881 * Looking for test storage... 00:13:13.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:13:13.881 14:23:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:13.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.881 --rc genhtml_branch_coverage=1 00:13:13.881 --rc genhtml_function_coverage=1 00:13:13.881 --rc genhtml_legend=1 00:13:13.881 --rc geninfo_all_blocks=1 00:13:13.881 --rc geninfo_unexecuted_blocks=1 00:13:13.881 00:13:13.881 ' 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:13.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.881 --rc genhtml_branch_coverage=1 00:13:13.881 --rc genhtml_function_coverage=1 00:13:13.881 --rc genhtml_legend=1 00:13:13.881 --rc geninfo_all_blocks=1 00:13:13.881 --rc geninfo_unexecuted_blocks=1 00:13:13.881 00:13:13.881 ' 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:13.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.881 --rc genhtml_branch_coverage=1 00:13:13.881 --rc genhtml_function_coverage=1 00:13:13.881 --rc genhtml_legend=1 00:13:13.881 --rc geninfo_all_blocks=1 00:13:13.881 --rc geninfo_unexecuted_blocks=1 00:13:13.881 00:13:13.881 ' 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 
00:13:13.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.881 --rc genhtml_branch_coverage=1 00:13:13.881 --rc genhtml_function_coverage=1 00:13:13.881 --rc genhtml_legend=1 00:13:13.881 --rc geninfo_all_blocks=1 00:13:13.881 --rc geninfo_unexecuted_blocks=1 00:13:13.881 00:13:13.881 ' 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.881 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.882 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.882 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:13.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:13.882 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:13.882 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:13.882 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:13.882 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:13.882 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:13.882 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:13.882 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:13.882 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:13.882 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.882 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:13.882 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:13.882 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:13.882 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.882 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.882 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.882 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:13.882 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:13.882 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:13:13.882 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:13:22.035 14:23:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:22.035 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:22.035 14:23:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:22.035 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:22.035 Found net devices under 0000:31:00.0: cvl_0_0 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:22.035 Found net devices under 0000:31:00.1: cvl_0_1 
00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:22.035 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:22.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:22.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.489 ms 00:13:22.036 00:13:22.036 --- 10.0.0.2 ping statistics --- 00:13:22.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.036 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:22.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:22.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:13:22.036 00:13:22.036 --- 10.0.0.1 ping statistics --- 00:13:22.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.036 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 
00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=2875250 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 2875250 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2875250 ']' 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:22.036 14:23:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.036 [2024-10-07 14:23:45.030515] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:13:22.036 [2024-10-07 14:23:45.030637] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.036 [2024-10-07 14:23:45.175071] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:22.036 [2024-10-07 14:23:45.363087] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.036 [2024-10-07 14:23:45.363137] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.036 [2024-10-07 14:23:45.363149] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.036 [2024-10-07 14:23:45.363162] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.036 [2024-10-07 14:23:45.363175] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:22.036 [2024-10-07 14:23:45.365441] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.036 [2024-10-07 14:23:45.365525] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.036 [2024-10-07 14:23:45.365643] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.036 [2024-10-07 14:23:45.365666] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.297 14:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:22.297 14:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:13:22.297 14:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:22.297 14:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:22.297 14:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.297 14:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.297 14:23:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:22.297 [2024-10-07 14:23:45.978583] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.558 14:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:22.558 14:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:22.558 14:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:22.819 14:23:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:22.819 14:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:23.079 14:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:23.079 14:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:23.340 14:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:23.340 14:23:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:23.601 14:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:23.862 14:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:23.862 14:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:24.123 14:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:24.123 14:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:24.385 14:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:24.385 14:23:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:13:24.385 14:23:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:24.646 14:23:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:24.646 14:23:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:24.907 14:23:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:24.907 14:23:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:24.907 14:23:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.168 [2024-10-07 14:23:48.735512] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.168 14:23:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:25.429 14:23:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:25.429 14:23:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:13:27.485 14:23:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:27.485 14:23:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:13:27.485 14:23:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:27.485 14:23:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:13:27.485 14:23:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:13:27.485 14:23:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:13:29.411 14:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:29.411 14:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:29.411 14:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:29.411 14:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:13:29.411 14:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:29.411 14:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:13:29.411 14:23:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:29.411 [global] 00:13:29.411 thread=1 00:13:29.411 invalidate=1 00:13:29.411 rw=write 00:13:29.411 time_based=1 00:13:29.411 runtime=1 00:13:29.411 ioengine=libaio 00:13:29.411 direct=1 00:13:29.411 bs=4096 00:13:29.411 iodepth=1 00:13:29.411 norandommap=0 00:13:29.411 numjobs=1 00:13:29.411 00:13:29.411 
verify_dump=1 00:13:29.411 verify_backlog=512 00:13:29.411 verify_state_save=0 00:13:29.411 do_verify=1 00:13:29.411 verify=crc32c-intel 00:13:29.411 [job0] 00:13:29.411 filename=/dev/nvme0n1 00:13:29.411 [job1] 00:13:29.411 filename=/dev/nvme0n2 00:13:29.411 [job2] 00:13:29.411 filename=/dev/nvme0n3 00:13:29.411 [job3] 00:13:29.411 filename=/dev/nvme0n4 00:13:29.411 Could not set queue depth (nvme0n1) 00:13:29.411 Could not set queue depth (nvme0n2) 00:13:29.411 Could not set queue depth (nvme0n3) 00:13:29.411 Could not set queue depth (nvme0n4) 00:13:29.672 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.672 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.672 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.672 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.672 fio-3.35 00:13:29.672 Starting 4 threads 00:13:31.101 00:13:31.101 job0: (groupid=0, jobs=1): err= 0: pid=2877123: Mon Oct 7 14:23:54 2024 00:13:31.101 read: IOPS=204, BW=817KiB/s (837kB/s)(832KiB/1018msec) 00:13:31.101 slat (nsec): min=6729, max=44855, avg=23207.05, stdev=7500.45 00:13:31.101 clat (usec): min=255, max=42555, avg=3944.17, stdev=11321.65 00:13:31.101 lat (usec): min=263, max=42563, avg=3967.38, stdev=11321.92 00:13:31.101 clat percentiles (usec): 00:13:31.101 | 1.00th=[ 273], 5.00th=[ 424], 10.00th=[ 474], 20.00th=[ 523], 00:13:31.101 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 603], 60.00th=[ 619], 00:13:31.101 | 70.00th=[ 635], 80.00th=[ 652], 90.00th=[ 701], 95.00th=[41681], 00:13:31.101 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:13:31.101 | 99.99th=[42730] 00:13:31.101 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:13:31.101 slat (nsec): min=9473, max=67169, avg=28226.75, 
stdev=9785.21 00:13:31.101 clat (usec): min=136, max=1166, avg=338.48, stdev=91.07 00:13:31.101 lat (usec): min=169, max=1176, avg=366.70, stdev=90.70 00:13:31.101 clat percentiles (usec): 00:13:31.101 | 1.00th=[ 153], 5.00th=[ 196], 10.00th=[ 249], 20.00th=[ 273], 00:13:31.101 | 30.00th=[ 289], 40.00th=[ 306], 50.00th=[ 326], 60.00th=[ 351], 00:13:31.101 | 70.00th=[ 375], 80.00th=[ 404], 90.00th=[ 453], 95.00th=[ 494], 00:13:31.101 | 99.00th=[ 537], 99.50th=[ 562], 99.90th=[ 1172], 99.95th=[ 1172], 00:13:31.101 | 99.99th=[ 1172] 00:13:31.101 bw ( KiB/s): min= 4096, max= 4096, per=31.56%, avg=4096.00, stdev= 0.00, samples=1 00:13:31.101 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:31.101 lat (usec) : 250=7.36%, 500=65.56%, 750=24.44%, 1000=0.14% 00:13:31.101 lat (msec) : 2=0.14%, 50=2.36% 00:13:31.101 cpu : usr=1.08%, sys=1.77%, ctx=720, majf=0, minf=1 00:13:31.101 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:31.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.101 issued rwts: total=208,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:31.101 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:31.101 job1: (groupid=0, jobs=1): err= 0: pid=2877154: Mon Oct 7 14:23:54 2024 00:13:31.101 read: IOPS=710, BW=2841KiB/s (2909kB/s)(2892KiB/1018msec) 00:13:31.101 slat (nsec): min=6680, max=46960, avg=22078.13, stdev=7947.38 00:13:31.101 clat (usec): min=181, max=42551, avg=802.79, stdev=2675.80 00:13:31.101 lat (usec): min=189, max=42580, avg=824.87, stdev=2676.20 00:13:31.101 clat percentiles (usec): 00:13:31.101 | 1.00th=[ 351], 5.00th=[ 441], 10.00th=[ 474], 20.00th=[ 510], 00:13:31.102 | 30.00th=[ 545], 40.00th=[ 562], 50.00th=[ 578], 60.00th=[ 611], 00:13:31.102 | 70.00th=[ 652], 80.00th=[ 742], 90.00th=[ 832], 95.00th=[ 873], 00:13:31.102 | 99.00th=[ 938], 99.50th=[13173], 
99.90th=[42730], 99.95th=[42730], 00:13:31.102 | 99.99th=[42730] 00:13:31.102 write: IOPS=1005, BW=4024KiB/s (4120kB/s)(4096KiB/1018msec); 0 zone resets 00:13:31.102 slat (nsec): min=9474, max=62166, avg=27860.56, stdev=9980.33 00:13:31.102 clat (usec): min=105, max=684, avg=371.77, stdev=93.88 00:13:31.102 lat (usec): min=138, max=728, avg=399.63, stdev=95.88 00:13:31.102 clat percentiles (usec): 00:13:31.102 | 1.00th=[ 141], 5.00th=[ 223], 10.00th=[ 243], 20.00th=[ 285], 00:13:31.102 | 30.00th=[ 326], 40.00th=[ 351], 50.00th=[ 367], 60.00th=[ 408], 00:13:31.102 | 70.00th=[ 441], 80.00th=[ 461], 90.00th=[ 486], 95.00th=[ 502], 00:13:31.102 | 99.00th=[ 537], 99.50th=[ 586], 99.90th=[ 660], 99.95th=[ 685], 00:13:31.102 | 99.99th=[ 685] 00:13:31.102 bw ( KiB/s): min= 4096, max= 4096, per=31.56%, avg=4096.00, stdev= 0.00, samples=2 00:13:31.102 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:13:31.102 lat (usec) : 250=7.04%, 500=55.47%, 750=29.48%, 1000=7.73% 00:13:31.102 lat (msec) : 4=0.06%, 20=0.06%, 50=0.17% 00:13:31.102 cpu : usr=2.95%, sys=3.93%, ctx=1747, majf=0, minf=1 00:13:31.102 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:31.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.102 issued rwts: total=723,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:31.102 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:31.102 job2: (groupid=0, jobs=1): err= 0: pid=2877181: Mon Oct 7 14:23:54 2024 00:13:31.102 read: IOPS=604, BW=2418KiB/s (2476kB/s)(2420KiB/1001msec) 00:13:31.102 slat (nsec): min=6921, max=45316, avg=24263.95, stdev=7860.17 00:13:31.102 clat (usec): min=387, max=41412, avg=858.30, stdev=1656.12 00:13:31.102 lat (usec): min=407, max=41438, avg=882.57, stdev=1656.29 00:13:31.102 clat percentiles (usec): 00:13:31.102 | 1.00th=[ 441], 5.00th=[ 519], 10.00th=[ 586], 20.00th=[ 
717], 00:13:31.102 | 30.00th=[ 775], 40.00th=[ 807], 50.00th=[ 824], 60.00th=[ 840], 00:13:31.102 | 70.00th=[ 857], 80.00th=[ 873], 90.00th=[ 898], 95.00th=[ 922], 00:13:31.102 | 99.00th=[ 1090], 99.50th=[ 1123], 99.90th=[41157], 99.95th=[41157], 00:13:31.102 | 99.99th=[41157] 00:13:31.102 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:13:31.102 slat (nsec): min=10241, max=68962, avg=30300.86, stdev=10674.88 00:13:31.102 clat (usec): min=130, max=785, avg=413.96, stdev=92.67 00:13:31.102 lat (usec): min=166, max=820, avg=444.26, stdev=96.10 00:13:31.102 clat percentiles (usec): 00:13:31.102 | 1.00th=[ 194], 5.00th=[ 265], 10.00th=[ 281], 20.00th=[ 334], 00:13:31.102 | 30.00th=[ 363], 40.00th=[ 400], 50.00th=[ 433], 60.00th=[ 449], 00:13:31.102 | 70.00th=[ 465], 80.00th=[ 486], 90.00th=[ 510], 95.00th=[ 537], 00:13:31.102 | 99.00th=[ 685], 99.50th=[ 709], 99.90th=[ 758], 99.95th=[ 783], 00:13:31.102 | 99.99th=[ 783] 00:13:31.102 bw ( KiB/s): min= 4096, max= 4096, per=31.56%, avg=4096.00, stdev= 0.00, samples=1 00:13:31.102 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:31.102 lat (usec) : 250=2.15%, 500=53.35%, 750=16.88%, 1000=26.58% 00:13:31.102 lat (msec) : 2=0.98%, 50=0.06% 00:13:31.102 cpu : usr=2.50%, sys=4.50%, ctx=1631, majf=0, minf=1 00:13:31.102 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:31.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.102 issued rwts: total=605,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:31.102 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:31.102 job3: (groupid=0, jobs=1): err= 0: pid=2877182: Mon Oct 7 14:23:54 2024 00:13:31.102 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:13:31.102 slat (nsec): min=7558, max=44352, avg=25779.27, stdev=4178.11 00:13:31.102 clat (usec): min=491, max=41440, 
avg=965.22, stdev=1797.79 00:13:31.102 lat (usec): min=534, max=41450, avg=990.99, stdev=1797.14 00:13:31.102 clat percentiles (usec): 00:13:31.102 | 1.00th=[ 570], 5.00th=[ 619], 10.00th=[ 676], 20.00th=[ 750], 00:13:31.102 | 30.00th=[ 832], 40.00th=[ 873], 50.00th=[ 922], 60.00th=[ 947], 00:13:31.102 | 70.00th=[ 971], 80.00th=[ 996], 90.00th=[ 1037], 95.00th=[ 1090], 00:13:31.102 | 99.00th=[ 1172], 99.50th=[ 1205], 99.90th=[41681], 99.95th=[41681], 00:13:31.102 | 99.99th=[41681] 00:13:31.102 write: IOPS=742, BW=2969KiB/s (3040kB/s)(2972KiB/1001msec); 0 zone resets 00:13:31.102 slat (nsec): min=9907, max=65058, avg=31591.33, stdev=8509.71 00:13:31.102 clat (usec): min=207, max=1036, avg=618.21, stdev=143.49 00:13:31.102 lat (usec): min=217, max=1088, avg=649.80, stdev=146.54 00:13:31.102 clat percentiles (usec): 00:13:31.102 | 1.00th=[ 297], 5.00th=[ 392], 10.00th=[ 433], 20.00th=[ 486], 00:13:31.102 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 619], 60.00th=[ 660], 00:13:31.102 | 70.00th=[ 701], 80.00th=[ 742], 90.00th=[ 807], 95.00th=[ 857], 00:13:31.102 | 99.00th=[ 930], 99.50th=[ 947], 99.90th=[ 1037], 99.95th=[ 1037], 00:13:31.102 | 99.99th=[ 1037] 00:13:31.102 bw ( KiB/s): min= 4096, max= 4096, per=31.56%, avg=4096.00, stdev= 0.00, samples=1 00:13:31.102 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:31.102 lat (usec) : 250=0.08%, 500=13.23%, 750=43.51%, 1000=35.06% 00:13:31.102 lat (msec) : 2=8.05%, 50=0.08% 00:13:31.102 cpu : usr=2.20%, sys=3.50%, ctx=1255, majf=0, minf=1 00:13:31.102 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:31.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:31.102 issued rwts: total=512,743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:31.102 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:31.102 00:13:31.102 Run status group 0 (all jobs): 
00:13:31.102 READ: bw=8047KiB/s (8240kB/s), 817KiB/s-2841KiB/s (837kB/s-2909kB/s), io=8192KiB (8389kB), run=1001-1018msec 00:13:31.102 WRITE: bw=12.7MiB/s (13.3MB/s), 2012KiB/s-4092KiB/s (2060kB/s-4190kB/s), io=12.9MiB (13.5MB), run=1001-1018msec 00:13:31.102 00:13:31.102 Disk stats (read/write): 00:13:31.102 nvme0n1: ios=242/512, merge=0/0, ticks=630/170, in_queue=800, util=83.17% 00:13:31.102 nvme0n2: ios=628/1024, merge=0/0, ticks=590/363, in_queue=953, util=87.06% 00:13:31.102 nvme0n3: ios=534/669, merge=0/0, ticks=1324/286, in_queue=1610, util=96.03% 00:13:31.102 nvme0n4: ios=423/512, merge=0/0, ticks=401/307, in_queue=708, util=88.93% 00:13:31.102 14:23:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:31.102 [global] 00:13:31.102 thread=1 00:13:31.102 invalidate=1 00:13:31.102 rw=randwrite 00:13:31.102 time_based=1 00:13:31.102 runtime=1 00:13:31.102 ioengine=libaio 00:13:31.102 direct=1 00:13:31.102 bs=4096 00:13:31.102 iodepth=1 00:13:31.102 norandommap=0 00:13:31.102 numjobs=1 00:13:31.102 00:13:31.102 verify_dump=1 00:13:31.102 verify_backlog=512 00:13:31.102 verify_state_save=0 00:13:31.102 do_verify=1 00:13:31.102 verify=crc32c-intel 00:13:31.102 [job0] 00:13:31.102 filename=/dev/nvme0n1 00:13:31.102 [job1] 00:13:31.102 filename=/dev/nvme0n2 00:13:31.102 [job2] 00:13:31.102 filename=/dev/nvme0n3 00:13:31.102 [job3] 00:13:31.102 filename=/dev/nvme0n4 00:13:31.102 Could not set queue depth (nvme0n1) 00:13:31.102 Could not set queue depth (nvme0n2) 00:13:31.102 Could not set queue depth (nvme0n3) 00:13:31.102 Could not set queue depth (nvme0n4) 00:13:31.369 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:31.369 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:31.369 job2: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:31.369 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:31.369 fio-3.35 00:13:31.369 Starting 4 threads 00:13:32.777 00:13:32.777 job0: (groupid=0, jobs=1): err= 0: pid=2877649: Mon Oct 7 14:23:56 2024 00:13:32.777 read: IOPS=19, BW=78.4KiB/s (80.3kB/s)(80.0KiB/1020msec) 00:13:32.777 slat (nsec): min=25374, max=26072, avg=25676.60, stdev=206.00 00:13:32.777 clat (usec): min=612, max=41276, avg=36965.48, stdev=12397.30 00:13:32.777 lat (usec): min=638, max=41302, avg=36991.16, stdev=12397.26 00:13:32.777 clat percentiles (usec): 00:13:32.777 | 1.00th=[ 611], 5.00th=[ 611], 10.00th=[ 824], 20.00th=[40633], 00:13:32.777 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:32.777 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:32.777 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:32.777 | 99.99th=[41157] 00:13:32.777 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:13:32.777 slat (nsec): min=9213, max=53887, avg=29855.32, stdev=7972.28 00:13:32.777 clat (usec): min=214, max=976, avg=507.19, stdev=113.81 00:13:32.777 lat (usec): min=224, max=1009, avg=537.05, stdev=116.38 00:13:32.777 clat percentiles (usec): 00:13:32.777 | 1.00th=[ 277], 5.00th=[ 330], 10.00th=[ 367], 20.00th=[ 412], 00:13:32.777 | 30.00th=[ 445], 40.00th=[ 474], 50.00th=[ 494], 60.00th=[ 537], 00:13:32.777 | 70.00th=[ 562], 80.00th=[ 603], 90.00th=[ 652], 95.00th=[ 709], 00:13:32.777 | 99.00th=[ 791], 99.50th=[ 873], 99.90th=[ 979], 99.95th=[ 979], 00:13:32.777 | 99.99th=[ 979] 00:13:32.777 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:13:32.777 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:32.777 lat (usec) : 250=0.38%, 500=48.68%, 750=45.11%, 1000=2.44% 00:13:32.777 
lat (msec) : 50=3.38% 00:13:32.777 cpu : usr=0.88%, sys=1.47%, ctx=534, majf=0, minf=1 00:13:32.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:32.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.777 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.777 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:32.777 job1: (groupid=0, jobs=1): err= 0: pid=2877663: Mon Oct 7 14:23:56 2024 00:13:32.777 read: IOPS=16, BW=67.5KiB/s (69.1kB/s)(68.0KiB/1008msec) 00:13:32.777 slat (nsec): min=25020, max=25954, avg=25445.94, stdev=242.19 00:13:32.777 clat (usec): min=1017, max=42019, avg=39292.24, stdev=9871.66 00:13:32.777 lat (usec): min=1043, max=42045, avg=39317.69, stdev=9871.67 00:13:32.777 clat percentiles (usec): 00:13:32.777 | 1.00th=[ 1020], 5.00th=[ 1020], 10.00th=[41157], 20.00th=[41157], 00:13:32.777 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:13:32.777 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:32.777 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:32.777 | 99.99th=[42206] 00:13:32.777 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:13:32.777 slat (nsec): min=9477, max=62720, avg=29196.86, stdev=8276.53 00:13:32.777 clat (usec): min=263, max=1024, avg=624.83, stdev=129.99 00:13:32.777 lat (usec): min=276, max=1056, avg=654.02, stdev=132.88 00:13:32.777 clat percentiles (usec): 00:13:32.777 | 1.00th=[ 297], 5.00th=[ 400], 10.00th=[ 457], 20.00th=[ 519], 00:13:32.777 | 30.00th=[ 553], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:13:32.777 | 70.00th=[ 701], 80.00th=[ 742], 90.00th=[ 791], 95.00th=[ 832], 00:13:32.777 | 99.00th=[ 906], 99.50th=[ 938], 99.90th=[ 1029], 99.95th=[ 1029], 00:13:32.777 | 99.99th=[ 1029] 00:13:32.777 bw ( KiB/s): min= 4096, max= 
4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:13:32.777 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:32.777 lat (usec) : 500=16.26%, 750=64.46%, 1000=15.88% 00:13:32.777 lat (msec) : 2=0.38%, 50=3.02% 00:13:32.777 cpu : usr=0.99%, sys=1.29%, ctx=529, majf=0, minf=2 00:13:32.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:32.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.777 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.777 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:32.777 job2: (groupid=0, jobs=1): err= 0: pid=2877682: Mon Oct 7 14:23:56 2024 00:13:32.777 read: IOPS=17, BW=70.7KiB/s (72.4kB/s)(72.0KiB/1019msec) 00:13:32.777 slat (nsec): min=26442, max=30046, avg=26906.61, stdev=805.09 00:13:32.777 clat (usec): min=1065, max=42036, avg=37218.18, stdev=13155.05 00:13:32.777 lat (usec): min=1095, max=42063, avg=37245.09, stdev=13154.52 00:13:32.777 clat percentiles (usec): 00:13:32.777 | 1.00th=[ 1074], 5.00th=[ 1074], 10.00th=[ 1090], 20.00th=[41157], 00:13:32.777 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:13:32.777 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:32.777 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:32.777 | 99.99th=[42206] 00:13:32.777 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:13:32.777 slat (nsec): min=9933, max=70038, avg=34172.89, stdev=8661.58 00:13:32.777 clat (usec): min=242, max=1262, avg=636.15, stdev=113.64 00:13:32.777 lat (usec): min=253, max=1296, avg=670.32, stdev=115.93 00:13:32.777 clat percentiles (usec): 00:13:32.777 | 1.00th=[ 351], 5.00th=[ 457], 10.00th=[ 498], 20.00th=[ 562], 00:13:32.777 | 30.00th=[ 594], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[ 660], 00:13:32.777 | 
70.00th=[ 685], 80.00th=[ 709], 90.00th=[ 758], 95.00th=[ 807], 00:13:32.777 | 99.00th=[ 971], 99.50th=[ 996], 99.90th=[ 1270], 99.95th=[ 1270], 00:13:32.777 | 99.99th=[ 1270] 00:13:32.777 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:13:32.777 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:32.777 lat (usec) : 250=0.19%, 500=9.62%, 750=75.09%, 1000=11.32% 00:13:32.777 lat (msec) : 2=0.75%, 50=3.02% 00:13:32.777 cpu : usr=0.88%, sys=1.77%, ctx=531, majf=0, minf=1 00:13:32.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:32.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.777 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.777 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.777 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:32.777 job3: (groupid=0, jobs=1): err= 0: pid=2877689: Mon Oct 7 14:23:56 2024 00:13:32.777 read: IOPS=16, BW=65.3KiB/s (66.9kB/s)(68.0KiB/1041msec) 00:13:32.777 slat (nsec): min=9859, max=26155, avg=25015.41, stdev=3906.92 00:13:32.777 clat (usec): min=41684, max=42034, avg=41952.12, stdev=71.41 00:13:32.777 lat (usec): min=41694, max=42060, avg=41977.14, stdev=75.17 00:13:32.777 clat percentiles (usec): 00:13:32.777 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:13:32.777 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:13:32.777 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:32.777 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:32.777 | 99.99th=[42206] 00:13:32.777 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:13:32.777 slat (nsec): min=9252, max=52522, avg=28014.03, stdev=10007.09 00:13:32.777 clat (usec): min=208, max=936, avg=602.91, stdev=140.59 00:13:32.777 lat (usec): min=231, max=968, avg=630.92, 
stdev=145.51 00:13:32.777 clat percentiles (usec): 00:13:32.777 | 1.00th=[ 227], 5.00th=[ 347], 10.00th=[ 408], 20.00th=[ 486], 00:13:32.777 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 652], 00:13:32.777 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 766], 95.00th=[ 807], 00:13:32.777 | 99.00th=[ 889], 99.50th=[ 906], 99.90th=[ 938], 99.95th=[ 938], 00:13:32.777 | 99.99th=[ 938] 00:13:32.777 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:13:32.777 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:32.777 lat (usec) : 250=2.27%, 500=19.28%, 750=62.00%, 1000=13.23% 00:13:32.777 lat (msec) : 50=3.21% 00:13:32.777 cpu : usr=0.77%, sys=1.35%, ctx=529, majf=0, minf=1 00:13:32.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:32.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.778 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:32.778 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:32.778 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:32.778 00:13:32.778 Run status group 0 (all jobs): 00:13:32.778 READ: bw=277KiB/s (283kB/s), 65.3KiB/s-78.4KiB/s (66.9kB/s-80.3kB/s), io=288KiB (295kB), run=1008-1041msec 00:13:32.778 WRITE: bw=7869KiB/s (8058kB/s), 1967KiB/s-2032KiB/s (2015kB/s-2081kB/s), io=8192KiB (8389kB), run=1008-1041msec 00:13:32.778 00:13:32.778 Disk stats (read/write): 00:13:32.778 nvme0n1: ios=65/512, merge=0/0, ticks=717/233, in_queue=950, util=99.60% 00:13:32.778 nvme0n2: ios=52/512, merge=0/0, ticks=521/305, in_queue=826, util=88.38% 00:13:32.778 nvme0n3: ios=60/512, merge=0/0, ticks=653/307, in_queue=960, util=99.05% 00:13:32.778 nvme0n4: ios=12/512, merge=0/0, ticks=504/298, in_queue=802, util=89.52% 00:13:32.778 14:23:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:32.778 [global] 00:13:32.778 thread=1 00:13:32.778 invalidate=1 00:13:32.778 rw=write 00:13:32.778 time_based=1 00:13:32.778 runtime=1 00:13:32.778 ioengine=libaio 00:13:32.778 direct=1 00:13:32.778 bs=4096 00:13:32.778 iodepth=128 00:13:32.778 norandommap=0 00:13:32.778 numjobs=1 00:13:32.778 00:13:32.778 verify_dump=1 00:13:32.778 verify_backlog=512 00:13:32.778 verify_state_save=0 00:13:32.778 do_verify=1 00:13:32.778 verify=crc32c-intel 00:13:32.778 [job0] 00:13:32.778 filename=/dev/nvme0n1 00:13:32.778 [job1] 00:13:32.778 filename=/dev/nvme0n2 00:13:32.778 [job2] 00:13:32.778 filename=/dev/nvme0n3 00:13:32.778 [job3] 00:13:32.778 filename=/dev/nvme0n4 00:13:32.778 Could not set queue depth (nvme0n1) 00:13:32.778 Could not set queue depth (nvme0n2) 00:13:32.778 Could not set queue depth (nvme0n3) 00:13:32.778 Could not set queue depth (nvme0n4) 00:13:33.039 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:33.039 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:33.039 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:33.039 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:33.039 fio-3.35 00:13:33.039 Starting 4 threads 00:13:34.452 00:13:34.452 job0: (groupid=0, jobs=1): err= 0: pid=2878150: Mon Oct 7 14:23:57 2024 00:13:34.452 read: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec) 00:13:34.452 slat (nsec): min=883, max=15356k, avg=84148.05, stdev=489659.42 00:13:34.452 clat (usec): min=5072, max=51899, avg=10956.73, stdev=7002.64 00:13:34.452 lat (usec): min=5931, max=51902, avg=11040.87, stdev=7031.10 00:13:34.452 clat percentiles (usec): 00:13:34.452 | 1.00th=[ 6456], 5.00th=[ 7308], 10.00th=[ 
8029], 20.00th=[ 8586], 00:13:34.452 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9765], 00:13:34.452 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11076], 95.00th=[22938], 00:13:34.452 | 99.00th=[50070], 99.50th=[50070], 99.90th=[51643], 99.95th=[51643], 00:13:34.452 | 99.99th=[51643] 00:13:34.452 write: IOPS=6576, BW=25.7MiB/s (26.9MB/s)(25.9MiB/1008msec); 0 zone resets 00:13:34.452 slat (nsec): min=1526, max=8393.1k, avg=70689.81, stdev=360782.40 00:13:34.452 clat (usec): min=3036, max=30382, avg=9097.96, stdev=3760.71 00:13:34.452 lat (usec): min=3044, max=30390, avg=9168.65, stdev=3777.56 00:13:34.452 clat percentiles (usec): 00:13:34.452 | 1.00th=[ 4015], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7242], 00:13:34.452 | 30.00th=[ 7308], 40.00th=[ 7767], 50.00th=[ 8455], 60.00th=[ 8586], 00:13:34.452 | 70.00th=[ 8717], 80.00th=[ 9110], 90.00th=[11207], 95.00th=[15926], 00:13:34.452 | 99.00th=[26084], 99.50th=[26608], 99.90th=[29754], 99.95th=[30278], 00:13:34.452 | 99.99th=[30278] 00:13:34.452 bw ( KiB/s): min=24944, max=27064, per=28.29%, avg=26004.00, stdev=1499.07, samples=2 00:13:34.452 iops : min= 6236, max= 6766, avg=6501.00, stdev=374.77, samples=2 00:13:34.452 lat (msec) : 4=0.52%, 10=77.31%, 20=17.45%, 50=4.25%, 100=0.46% 00:13:34.452 cpu : usr=2.38%, sys=3.67%, ctx=858, majf=0, minf=1 00:13:34.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:34.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:34.452 issued rwts: total=6144,6629,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.452 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:34.452 job1: (groupid=0, jobs=1): err= 0: pid=2878165: Mon Oct 7 14:23:57 2024 00:13:34.452 read: IOPS=5692, BW=22.2MiB/s (23.3MB/s)(22.3MiB/1002msec) 00:13:34.452 slat (nsec): min=905, max=9727.4k, avg=75063.49, stdev=547622.45 00:13:34.452 clat 
(usec): min=1150, max=28334, avg=9853.10, stdev=3475.32 00:13:34.452 lat (usec): min=1746, max=28363, avg=9928.17, stdev=3514.00 00:13:34.452 clat percentiles (usec): 00:13:34.452 | 1.00th=[ 4228], 5.00th=[ 6128], 10.00th=[ 6652], 20.00th=[ 7308], 00:13:34.452 | 30.00th=[ 7701], 40.00th=[ 7963], 50.00th=[ 8979], 60.00th=[ 9765], 00:13:34.452 | 70.00th=[10814], 80.00th=[12649], 90.00th=[14222], 95.00th=[16712], 00:13:34.452 | 99.00th=[20579], 99.50th=[23200], 99.90th=[23200], 99.95th=[23200], 00:13:34.452 | 99.99th=[28443] 00:13:34.452 write: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:13:34.453 slat (nsec): min=1593, max=8728.9k, avg=87192.12, stdev=528337.80 00:13:34.453 clat (usec): min=1178, max=69054, avg=11543.56, stdev=10103.82 00:13:34.453 lat (usec): min=1190, max=69064, avg=11630.75, stdev=10170.73 00:13:34.453 clat percentiles (usec): 00:13:34.453 | 1.00th=[ 3720], 5.00th=[ 4621], 10.00th=[ 5080], 20.00th=[ 6063], 00:13:34.453 | 30.00th=[ 6783], 40.00th=[ 7111], 50.00th=[ 7701], 60.00th=[ 9634], 00:13:34.453 | 70.00th=[12518], 80.00th=[14353], 90.00th=[19268], 95.00th=[28967], 00:13:34.453 | 99.00th=[63701], 99.50th=[66847], 99.90th=[68682], 99.95th=[68682], 00:13:34.453 | 99.99th=[68682] 00:13:34.453 bw ( KiB/s): min=24152, max=24560, per=26.50%, avg=24356.00, stdev=288.50, samples=2 00:13:34.453 iops : min= 6038, max= 6140, avg=6089.00, stdev=72.12, samples=2 00:13:34.453 lat (msec) : 2=0.23%, 4=1.33%, 10=59.88%, 20=32.77%, 50=4.65% 00:13:34.453 lat (msec) : 100=1.14% 00:13:34.453 cpu : usr=5.39%, sys=5.89%, ctx=431, majf=0, minf=1 00:13:34.453 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:34.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:34.453 issued rwts: total=5704,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.453 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:13:34.453 job2: (groupid=0, jobs=1): err= 0: pid=2878184: Mon Oct 7 14:23:57 2024 00:13:34.453 read: IOPS=3556, BW=13.9MiB/s (14.6MB/s)(14.1MiB/1014msec) 00:13:34.453 slat (nsec): min=930, max=14982k, avg=124622.86, stdev=906873.46 00:13:34.453 clat (usec): min=3269, max=47006, avg=14416.72, stdev=8770.83 00:13:34.453 lat (usec): min=3270, max=47015, avg=14541.34, stdev=8830.69 00:13:34.453 clat percentiles (usec): 00:13:34.453 | 1.00th=[ 4228], 5.00th=[ 6980], 10.00th=[ 7767], 20.00th=[ 8586], 00:13:34.453 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[10945], 60.00th=[12911], 00:13:34.453 | 70.00th=[16450], 80.00th=[19006], 90.00th=[26608], 95.00th=[35914], 00:13:34.453 | 99.00th=[44303], 99.50th=[45351], 99.90th=[46924], 99.95th=[46924], 00:13:34.453 | 99.99th=[46924] 00:13:34.453 write: IOPS=4039, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1014msec); 0 zone resets 00:13:34.453 slat (nsec): min=1579, max=25954k, avg=130127.88, stdev=756454.78 00:13:34.453 clat (usec): min=1186, max=73284, avg=18719.11, stdev=16046.45 00:13:34.453 lat (usec): min=1196, max=73292, avg=18849.24, stdev=16149.54 00:13:34.453 clat percentiles (usec): 00:13:34.453 | 1.00th=[ 3163], 5.00th=[ 5145], 10.00th=[ 6587], 20.00th=[ 8029], 00:13:34.453 | 30.00th=[10028], 40.00th=[12387], 50.00th=[13435], 60.00th=[14484], 00:13:34.453 | 70.00th=[15401], 80.00th=[27132], 90.00th=[43254], 95.00th=[63701], 00:13:34.453 | 99.00th=[70779], 99.50th=[71828], 99.90th=[72877], 99.95th=[72877], 00:13:34.453 | 99.99th=[72877] 00:13:34.453 bw ( KiB/s): min=12096, max=19824, per=17.36%, avg=15960.00, stdev=5464.52, samples=2 00:13:34.453 iops : min= 3024, max= 4956, avg=3990.00, stdev=1366.13, samples=2 00:13:34.453 lat (msec) : 2=0.03%, 4=1.88%, 10=35.48%, 20=41.18%, 50=17.00% 00:13:34.453 lat (msec) : 100=4.43% 00:13:34.453 cpu : usr=2.86%, sys=4.15%, ctx=395, majf=0, minf=2 00:13:34.453 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:34.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:34.453 issued rwts: total=3606,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.453 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:34.453 job3: (groupid=0, jobs=1): err= 0: pid=2878191: Mon Oct 7 14:23:57 2024 00:13:34.453 read: IOPS=6089, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1009msec) 00:13:34.453 slat (nsec): min=906, max=9605.1k, avg=83050.07, stdev=605527.26 00:13:34.453 clat (usec): min=3842, max=36504, avg=10721.54, stdev=2994.60 00:13:34.453 lat (usec): min=3881, max=40550, avg=10804.59, stdev=3039.13 00:13:34.453 clat percentiles (usec): 00:13:34.453 | 1.00th=[ 4817], 5.00th=[ 6652], 10.00th=[ 7439], 20.00th=[ 8717], 00:13:34.453 | 30.00th=[ 9896], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:13:34.453 | 70.00th=[10945], 80.00th=[11600], 90.00th=[13960], 95.00th=[15795], 00:13:34.453 | 99.00th=[19530], 99.50th=[21103], 99.90th=[36439], 99.95th=[36439], 00:13:34.453 | 99.99th=[36439] 00:13:34.453 write: IOPS=6375, BW=24.9MiB/s (26.1MB/s)(25.1MiB/1009msec); 0 zone resets 00:13:34.453 slat (nsec): min=1580, max=12949k, avg=63001.68, stdev=472386.23 00:13:34.453 clat (usec): min=917, max=30478, avg=9665.70, stdev=4265.91 00:13:34.453 lat (usec): min=925, max=31629, avg=9728.71, stdev=4298.44 00:13:34.453 clat percentiles (usec): 00:13:34.453 | 1.00th=[ 1713], 5.00th=[ 3458], 10.00th=[ 4621], 20.00th=[ 5932], 00:13:34.453 | 30.00th=[ 7046], 40.00th=[ 9372], 50.00th=[10290], 60.00th=[10945], 00:13:34.453 | 70.00th=[11207], 80.00th=[11469], 90.00th=[13435], 95.00th=[15533], 00:13:34.453 | 99.00th=[24511], 99.50th=[26608], 99.90th=[26870], 99.95th=[26870], 00:13:34.453 | 99.99th=[30540] 00:13:34.453 bw ( KiB/s): min=24576, max=25864, per=27.44%, avg=25220.00, stdev=910.75, samples=2 00:13:34.453 iops : min= 6144, max= 6466, avg=6305.00, stdev=227.69, samples=2 00:13:34.453 lat (usec) : 1000=0.02% 00:13:34.453 lat 
(msec) : 2=0.64%, 4=2.72%, 10=33.98%, 20=60.11%, 50=2.53% 00:13:34.453 cpu : usr=3.97%, sys=7.14%, ctx=576, majf=0, minf=1 00:13:34.453 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:34.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:34.453 issued rwts: total=6144,6433,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.453 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:34.453 00:13:34.453 Run status group 0 (all jobs): 00:13:34.453 READ: bw=83.2MiB/s (87.2MB/s), 13.9MiB/s-23.8MiB/s (14.6MB/s-25.0MB/s), io=84.4MiB (88.5MB), run=1002-1014msec 00:13:34.453 WRITE: bw=89.8MiB/s (94.1MB/s), 15.8MiB/s-25.7MiB/s (16.5MB/s-26.9MB/s), io=91.0MiB (95.4MB), run=1002-1014msec 00:13:34.453 00:13:34.453 Disk stats (read/write): 00:13:34.453 nvme0n1: ios=5170/5417, merge=0/0, ticks=14412/11825, in_queue=26237, util=86.77% 00:13:34.453 nvme0n2: ios=5159/5327, merge=0/0, ticks=48161/50492, in_queue=98653, util=96.23% 00:13:34.453 nvme0n3: ios=2917/3072, merge=0/0, ticks=43724/57521, in_queue=101245, util=88.38% 00:13:34.453 nvme0n4: ios=5141/5487, merge=0/0, ticks=47555/48391, in_queue=95946, util=91.34% 00:13:34.453 14:23:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:34.453 [global] 00:13:34.453 thread=1 00:13:34.453 invalidate=1 00:13:34.453 rw=randwrite 00:13:34.453 time_based=1 00:13:34.453 runtime=1 00:13:34.453 ioengine=libaio 00:13:34.453 direct=1 00:13:34.453 bs=4096 00:13:34.453 iodepth=128 00:13:34.453 norandommap=0 00:13:34.453 numjobs=1 00:13:34.453 00:13:34.453 verify_dump=1 00:13:34.453 verify_backlog=512 00:13:34.453 verify_state_save=0 00:13:34.453 do_verify=1 00:13:34.453 verify=crc32c-intel 00:13:34.453 [job0] 00:13:34.453 filename=/dev/nvme0n1 00:13:34.453 
[job1] 00:13:34.453 filename=/dev/nvme0n2 00:13:34.453 [job2] 00:13:34.453 filename=/dev/nvme0n3 00:13:34.453 [job3] 00:13:34.453 filename=/dev/nvme0n4 00:13:34.453 Could not set queue depth (nvme0n1) 00:13:34.453 Could not set queue depth (nvme0n2) 00:13:34.453 Could not set queue depth (nvme0n3) 00:13:34.453 Could not set queue depth (nvme0n4) 00:13:34.717 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:34.717 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:34.717 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:34.717 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:34.717 fio-3.35 00:13:34.717 Starting 4 threads 00:13:36.125 00:13:36.125 job0: (groupid=0, jobs=1): err= 0: pid=2878638: Mon Oct 7 14:23:59 2024 00:13:36.125 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:13:36.125 slat (nsec): min=896, max=16029k, avg=143600.71, stdev=927692.69 00:13:36.125 clat (usec): min=2014, max=63660, avg=19099.13, stdev=14710.42 00:13:36.125 lat (usec): min=2026, max=63690, avg=19242.73, stdev=14824.91 00:13:36.125 clat percentiles (usec): 00:13:36.125 | 1.00th=[ 4883], 5.00th=[ 6325], 10.00th=[ 7635], 20.00th=[ 8094], 00:13:36.125 | 30.00th=[ 9896], 40.00th=[11600], 50.00th=[11994], 60.00th=[12518], 00:13:36.125 | 70.00th=[19792], 80.00th=[35914], 90.00th=[45876], 95.00th=[50594], 00:13:36.125 | 99.00th=[54789], 99.50th=[56361], 99.90th=[60556], 99.95th=[62129], 00:13:36.125 | 99.99th=[63701] 00:13:36.125 write: IOPS=3575, BW=14.0MiB/s (14.6MB/s)(14.1MiB/1007msec); 0 zone resets 00:13:36.125 slat (nsec): min=1521, max=13984k, avg=128174.02, stdev=810960.55 00:13:36.125 clat (usec): min=651, max=43605, avg=16411.36, stdev=9553.71 00:13:36.125 lat (usec): min=883, max=43627, avg=16539.54, 
stdev=9632.92 00:13:36.125 clat percentiles (usec): 00:13:36.125 | 1.00th=[ 1614], 5.00th=[ 4146], 10.00th=[ 5145], 20.00th=[ 7635], 00:13:36.125 | 30.00th=[ 9765], 40.00th=[12911], 50.00th=[13304], 60.00th=[15664], 00:13:36.125 | 70.00th=[21103], 80.00th=[27395], 90.00th=[31065], 95.00th=[33424], 00:13:36.125 | 99.00th=[37487], 99.50th=[37487], 99.90th=[41681], 99.95th=[43254], 00:13:36.125 | 99.99th=[43779] 00:13:36.125 bw ( KiB/s): min=12288, max=16384, per=17.33%, avg=14336.00, stdev=2896.31, samples=2 00:13:36.126 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:13:36.126 lat (usec) : 750=0.01%, 1000=0.03% 00:13:36.126 lat (msec) : 2=0.75%, 4=1.81%, 10=27.84%, 20=38.32%, 50=27.78% 00:13:36.126 lat (msec) : 100=3.47% 00:13:36.126 cpu : usr=2.19%, sys=4.27%, ctx=397, majf=0, minf=1 00:13:36.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:13:36.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:36.126 issued rwts: total=3584,3601,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:36.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:36.126 job1: (groupid=0, jobs=1): err= 0: pid=2878654: Mon Oct 7 14:23:59 2024 00:13:36.126 read: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec) 00:13:36.126 slat (nsec): min=1004, max=10016k, avg=69379.27, stdev=510415.62 00:13:36.126 clat (usec): min=3695, max=27883, avg=8784.60, stdev=2920.90 00:13:36.126 lat (usec): min=3701, max=27892, avg=8853.97, stdev=2965.84 00:13:36.126 clat percentiles (usec): 00:13:36.126 | 1.00th=[ 4293], 5.00th=[ 5997], 10.00th=[ 6390], 20.00th=[ 6718], 00:13:36.126 | 30.00th=[ 7111], 40.00th=[ 7439], 50.00th=[ 8029], 60.00th=[ 8848], 00:13:36.126 | 70.00th=[ 9372], 80.00th=[10421], 90.00th=[11731], 95.00th=[13960], 00:13:36.126 | 99.00th=[21103], 99.50th=[24249], 99.90th=[27657], 99.95th=[27919], 00:13:36.126 | 
99.99th=[27919] 00:13:36.126 write: IOPS=6953, BW=27.2MiB/s (28.5MB/s)(27.3MiB/1005msec); 0 zone resets 00:13:36.126 slat (nsec): min=1655, max=16782k, avg=71213.26, stdev=502536.22 00:13:36.126 clat (usec): min=1347, max=37120, avg=9844.43, stdev=6272.56 00:13:36.126 lat (usec): min=1355, max=37153, avg=9915.64, stdev=6311.90 00:13:36.126 clat percentiles (usec): 00:13:36.126 | 1.00th=[ 3195], 5.00th=[ 3982], 10.00th=[ 4146], 20.00th=[ 5211], 00:13:36.126 | 30.00th=[ 5997], 40.00th=[ 6587], 50.00th=[ 6849], 60.00th=[ 8356], 00:13:36.126 | 70.00th=[10945], 80.00th=[14877], 90.00th=[19792], 95.00th=[22938], 00:13:36.126 | 99.00th=[26346], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:13:36.126 | 99.99th=[36963] 00:13:36.126 bw ( KiB/s): min=24576, max=30312, per=33.17%, avg=27444.00, stdev=4055.96, samples=2 00:13:36.126 iops : min= 6144, max= 7578, avg=6861.00, stdev=1013.99, samples=2 00:13:36.126 lat (msec) : 2=0.07%, 4=2.98%, 10=69.41%, 20=22.23%, 50=5.32% 00:13:36.126 cpu : usr=5.28%, sys=7.87%, ctx=453, majf=0, minf=1 00:13:36.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:13:36.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:36.126 issued rwts: total=6656,6988,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:36.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:36.126 job2: (groupid=0, jobs=1): err= 0: pid=2878675: Mon Oct 7 14:23:59 2024 00:13:36.126 read: IOPS=6245, BW=24.4MiB/s (25.6MB/s)(24.5MiB/1004msec) 00:13:36.126 slat (nsec): min=957, max=9219.7k, avg=75406.65, stdev=495429.43 00:13:36.126 clat (usec): min=1490, max=24611, avg=9381.97, stdev=2928.28 00:13:36.126 lat (usec): min=4077, max=24613, avg=9457.38, stdev=2953.55 00:13:36.126 clat percentiles (usec): 00:13:36.126 | 1.00th=[ 5145], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7242], 00:13:36.126 | 30.00th=[ 7570], 40.00th=[ 
8029], 50.00th=[ 8455], 60.00th=[ 9241], 00:13:36.126 | 70.00th=[10028], 80.00th=[10814], 90.00th=[13042], 95.00th=[15795], 00:13:36.126 | 99.00th=[19268], 99.50th=[22152], 99.90th=[23987], 99.95th=[24511], 00:13:36.126 | 99.99th=[24511] 00:13:36.126 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec); 0 zone resets 00:13:36.126 slat (nsec): min=1598, max=11260k, avg=73999.58, stdev=425619.59 00:13:36.126 clat (usec): min=1188, max=24838, avg=10292.54, stdev=4151.34 00:13:36.126 lat (usec): min=1199, max=24846, avg=10366.54, stdev=4174.74 00:13:36.126 clat percentiles (usec): 00:13:36.126 | 1.00th=[ 3654], 5.00th=[ 4555], 10.00th=[ 4883], 20.00th=[ 6718], 00:13:36.126 | 30.00th=[ 7439], 40.00th=[ 8160], 50.00th=[ 9503], 60.00th=[11863], 00:13:36.126 | 70.00th=[13173], 80.00th=[13829], 90.00th=[15139], 95.00th=[16909], 00:13:36.126 | 99.00th=[20579], 99.50th=[24773], 99.90th=[24773], 99.95th=[24773], 00:13:36.126 | 99.99th=[24773] 00:13:36.126 bw ( KiB/s): min=24560, max=28672, per=32.17%, avg=26616.00, stdev=2907.62, samples=2 00:13:36.126 iops : min= 6140, max= 7168, avg=6654.00, stdev=726.91, samples=2 00:13:36.126 lat (msec) : 2=0.02%, 4=0.81%, 10=59.93%, 20=38.18%, 50=1.05% 00:13:36.126 cpu : usr=5.38%, sys=6.48%, ctx=549, majf=0, minf=2 00:13:36.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:13:36.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:36.126 issued rwts: total=6270,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:36.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:36.126 job3: (groupid=0, jobs=1): err= 0: pid=2878683: Mon Oct 7 14:23:59 2024 00:13:36.126 read: IOPS=3353, BW=13.1MiB/s (13.7MB/s)(13.2MiB/1004msec) 00:13:36.126 slat (nsec): min=925, max=20915k, avg=122260.16, stdev=846445.61 00:13:36.126 clat (usec): min=2826, max=45959, avg=15701.59, stdev=7647.01 00:13:36.126 
lat (usec): min=6012, max=45986, avg=15823.85, stdev=7718.85 00:13:36.126 clat percentiles (usec): 00:13:36.126 | 1.00th=[ 6325], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9896], 00:13:36.126 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10945], 60.00th=[15795], 00:13:36.126 | 70.00th=[19006], 80.00th=[22152], 90.00th=[27395], 95.00th=[31851], 00:13:36.126 | 99.00th=[37487], 99.50th=[37487], 99.90th=[39060], 99.95th=[40633], 00:13:36.126 | 99.99th=[45876] 00:13:36.126 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:13:36.126 slat (nsec): min=1537, max=22454k, avg=158528.53, stdev=861340.26 00:13:36.126 clat (usec): min=3635, max=58089, avg=20745.14, stdev=15162.02 00:13:36.126 lat (usec): min=4108, max=58098, avg=20903.66, stdev=15266.48 00:13:36.126 clat percentiles (usec): 00:13:36.126 | 1.00th=[ 4178], 5.00th=[ 5932], 10.00th=[ 8455], 20.00th=[ 9110], 00:13:36.126 | 30.00th=[ 9765], 40.00th=[12911], 50.00th=[15664], 60.00th=[17695], 00:13:36.126 | 70.00th=[20055], 80.00th=[33817], 90.00th=[51119], 95.00th=[53216], 00:13:36.126 | 99.00th=[56886], 99.50th=[57410], 99.90th=[57934], 99.95th=[57934], 00:13:36.126 | 99.99th=[57934] 00:13:36.126 bw ( KiB/s): min=12288, max=16384, per=17.33%, avg=14336.00, stdev=2896.31, samples=2 00:13:36.126 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:13:36.126 lat (msec) : 4=0.09%, 10=30.86%, 20=42.02%, 50=21.41%, 100=5.63% 00:13:36.126 cpu : usr=1.99%, sys=3.59%, ctx=394, majf=0, minf=1 00:13:36.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:36.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:36.126 issued rwts: total=3367,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:36.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:36.126 00:13:36.126 Run status group 0 (all jobs): 00:13:36.126 READ: bw=77.1MiB/s 
(80.8MB/s), 13.1MiB/s-25.9MiB/s (13.7MB/s-27.1MB/s), io=77.6MiB (81.4MB), run=1004-1007msec 00:13:36.126 WRITE: bw=80.8MiB/s (84.7MB/s), 13.9MiB/s-27.2MiB/s (14.6MB/s-28.5MB/s), io=81.4MiB (85.3MB), run=1004-1007msec 00:13:36.126 00:13:36.126 Disk stats (read/write): 00:13:36.126 nvme0n1: ios=2255/2560, merge=0/0, ticks=18177/16292, in_queue=34469, util=86.27% 00:13:36.126 nvme0n2: ios=5657/5647, merge=0/0, ticks=47815/51242, in_queue=99057, util=98.17% 00:13:36.126 nvme0n3: ios=5616/5632, merge=0/0, ticks=49330/52474, in_queue=101804, util=88.38% 00:13:36.126 nvme0n4: ios=2560/3007, merge=0/0, ticks=20369/31605, in_queue=51974, util=89.52% 00:13:36.126 14:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:36.126 14:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:36.126 14:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2878783 00:13:36.126 14:23:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:36.126 [global] 00:13:36.126 thread=1 00:13:36.126 invalidate=1 00:13:36.126 rw=read 00:13:36.126 time_based=1 00:13:36.126 runtime=10 00:13:36.126 ioengine=libaio 00:13:36.126 direct=1 00:13:36.126 bs=4096 00:13:36.126 iodepth=1 00:13:36.126 norandommap=1 00:13:36.126 numjobs=1 00:13:36.126 00:13:36.126 [job0] 00:13:36.126 filename=/dev/nvme0n1 00:13:36.126 [job1] 00:13:36.126 filename=/dev/nvme0n2 00:13:36.126 [job2] 00:13:36.126 filename=/dev/nvme0n3 00:13:36.126 [job3] 00:13:36.126 filename=/dev/nvme0n4 00:13:36.126 Could not set queue depth (nvme0n1) 00:13:36.126 Could not set queue depth (nvme0n2) 00:13:36.126 Could not set queue depth (nvme0n3) 00:13:36.126 Could not set queue depth (nvme0n4) 00:13:36.387 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:36.387 job1: (g=0): 
rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:36.387 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:36.387 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:36.387 fio-3.35 00:13:36.387 Starting 4 threads 00:13:38.934 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:38.934 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:39.195 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=1499136, buflen=4096 00:13:39.195 fio: pid=2879168, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:39.195 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:39.195 14:24:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:39.195 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=7929856, buflen=4096 00:13:39.195 fio: pid=2879161, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:39.454 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10514432, buflen=4096 00:13:39.454 fio: pid=2879131, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:39.454 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:39.454 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:39.714 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=319488, buflen=4096 00:13:39.714 fio: pid=2879141, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:39.714 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:39.714 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:39.714 00:13:39.714 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2879131: Mon Oct 7 14:24:03 2024 00:13:39.714 read: IOPS=853, BW=3414KiB/s (3495kB/s)(10.0MiB/3008msec) 00:13:39.714 slat (usec): min=6, max=32320, avg=56.02, stdev=882.04 00:13:39.714 clat (usec): min=496, max=1437, avg=1085.08, stdev=139.04 00:13:39.714 lat (usec): min=521, max=33476, avg=1141.11, stdev=895.66 00:13:39.714 clat percentiles (usec): 00:13:39.714 | 1.00th=[ 750], 5.00th=[ 824], 10.00th=[ 865], 20.00th=[ 963], 00:13:39.714 | 30.00th=[ 1037], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1139], 00:13:39.714 | 70.00th=[ 1172], 80.00th=[ 1205], 90.00th=[ 1237], 95.00th=[ 1270], 00:13:39.714 | 99.00th=[ 1336], 99.50th=[ 1352], 99.90th=[ 1401], 99.95th=[ 1418], 00:13:39.714 | 99.99th=[ 1434] 00:13:39.714 bw ( KiB/s): min= 3328, max= 3976, per=58.66%, avg=3604.80, stdev=284.01, samples=5 00:13:39.714 iops : min= 832, max= 994, avg=901.20, stdev=71.00, samples=5 00:13:39.714 lat (usec) : 500=0.04%, 750=0.90%, 1000=23.75% 00:13:39.714 lat (msec) : 2=75.27% 00:13:39.714 cpu : usr=0.86%, sys=2.63%, ctx=2572, majf=0, minf=2 00:13:39.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:39.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:13:39.714 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.714 issued rwts: total=2568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:39.714 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2879141: Mon Oct 7 14:24:03 2024 00:13:39.714 read: IOPS=24, BW=96.9KiB/s (99.2kB/s)(312KiB/3221msec) 00:13:39.714 slat (usec): min=24, max=4668, avg=90.36, stdev=522.72 00:13:39.714 clat (usec): min=1787, max=42099, avg=40911.03, stdev=4512.39 00:13:39.714 lat (usec): min=1824, max=46040, avg=41002.21, stdev=4548.79 00:13:39.714 clat percentiles (usec): 00:13:39.714 | 1.00th=[ 1795], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:39.714 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:13:39.714 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:39.714 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:39.714 | 99.99th=[42206] 00:13:39.714 bw ( KiB/s): min= 96, max= 104, per=1.58%, avg=97.33, stdev= 3.27, samples=6 00:13:39.714 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:13:39.714 lat (msec) : 2=1.27%, 50=97.47% 00:13:39.714 cpu : usr=0.09%, sys=0.00%, ctx=82, majf=0, minf=2 00:13:39.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:39.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.714 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.714 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:39.714 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2879161: Mon Oct 7 14:24:03 2024 00:13:39.714 read: IOPS=691, BW=2763KiB/s (2829kB/s)(7744KiB/2803msec) 00:13:39.714 slat (nsec): min=1960, 
max=134849, avg=27156.77, stdev=4853.15 00:13:39.714 clat (usec): min=406, max=41857, avg=1401.30, stdev=3851.24 00:13:39.714 lat (usec): min=433, max=41885, avg=1428.45, stdev=3851.92 00:13:39.714 clat percentiles (usec): 00:13:39.714 | 1.00th=[ 693], 5.00th=[ 857], 10.00th=[ 914], 20.00th=[ 971], 00:13:39.714 | 30.00th=[ 1004], 40.00th=[ 1029], 50.00th=[ 1037], 60.00th=[ 1057], 00:13:39.714 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1156], 00:13:39.714 | 99.00th=[ 2311], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:13:39.714 | 99.99th=[41681] 00:13:39.714 bw ( KiB/s): min= 1800, max= 3808, per=50.23%, avg=3086.40, stdev=851.72, samples=5 00:13:39.714 iops : min= 450, max= 952, avg=771.60, stdev=212.93, samples=5 00:13:39.714 lat (usec) : 500=0.15%, 750=1.29%, 1000=26.02% 00:13:39.714 lat (msec) : 2=71.45%, 4=0.10%, 50=0.93% 00:13:39.714 cpu : usr=0.75%, sys=2.18%, ctx=1941, majf=0, minf=1 00:13:39.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:39.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.714 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.714 issued rwts: total=1937,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:39.714 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2879168: Mon Oct 7 14:24:03 2024 00:13:39.714 read: IOPS=139, BW=558KiB/s (572kB/s)(1464KiB/2623msec) 00:13:39.714 slat (nsec): min=6680, max=41791, avg=26159.59, stdev=5050.59 00:13:39.714 clat (usec): min=689, max=41972, avg=7073.37, stdev=14323.43 00:13:39.714 lat (usec): min=722, max=41998, avg=7099.53, stdev=14323.32 00:13:39.714 clat percentiles (usec): 00:13:39.714 | 1.00th=[ 742], 5.00th=[ 857], 10.00th=[ 906], 20.00th=[ 979], 00:13:39.714 | 30.00th=[ 1020], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1123], 00:13:39.714 | 
70.00th=[ 1156], 80.00th=[ 1221], 90.00th=[41157], 95.00th=[41157], 00:13:39.714 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:39.714 | 99.99th=[42206] 00:13:39.715 bw ( KiB/s): min= 96, max= 1296, per=9.44%, avg=580.80, stdev=501.57, samples=5 00:13:39.715 iops : min= 24, max= 324, avg=145.20, stdev=125.39, samples=5 00:13:39.715 lat (usec) : 750=1.09%, 1000=23.71% 00:13:39.715 lat (msec) : 2=59.67%, 4=0.27%, 50=14.99% 00:13:39.715 cpu : usr=0.19%, sys=0.53%, ctx=367, majf=0, minf=2 00:13:39.715 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:39.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.715 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.715 issued rwts: total=367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.715 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:39.715 00:13:39.715 Run status group 0 (all jobs): 00:13:39.715 READ: bw=6143KiB/s (6291kB/s), 96.9KiB/s-3414KiB/s (99.2kB/s-3495kB/s), io=19.3MiB (20.3MB), run=2623-3221msec 00:13:39.715 00:13:39.715 Disk stats (read/write): 00:13:39.715 nvme0n1: ios=2472/0, merge=0/0, ticks=2619/0, in_queue=2619, util=92.22% 00:13:39.715 nvme0n2: ios=75/0, merge=0/0, ticks=3069/0, in_queue=3069, util=95.54% 00:13:39.715 nvme0n3: ios=1970/0, merge=0/0, ticks=2624/0, in_queue=2624, util=99.11% 00:13:39.715 nvme0n4: ios=365/0, merge=0/0, ticks=2529/0, in_queue=2529, util=96.46% 00:13:39.974 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:39.974 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:40.234 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:13:40.234 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:40.494 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:40.494 14:24:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:40.494 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:40.494 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:40.755 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:40.755 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2878783 00:13:40.755 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:40.755 14:24:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:41.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l 
-o NAME,SERIAL 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:41.696 nvmf hotplug test: fio failed as expected 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:41.696 rmmod 
nvme_tcp 00:13:41.696 rmmod nvme_fabrics 00:13:41.696 rmmod nvme_keyring 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 2875250 ']' 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 2875250 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2875250 ']' 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2875250 00:13:41.696 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:13:41.697 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:41.697 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2875250 00:13:41.957 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:41.957 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:41.957 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2875250' 00:13:41.957 killing process with pid 2875250 00:13:41.957 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2875250 00:13:41.957 14:24:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2875250 00:13:42.898 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 
00:13:42.898 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:42.898 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:42.898 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:13:42.898 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:13:42.898 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:42.898 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:13:42.898 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:42.898 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:42.898 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.898 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.898 14:24:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.809 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:44.809 00:13:44.809 real 0m31.281s 00:13:44.809 user 2m39.551s 00:13:44.809 sys 0m9.723s 00:13:44.809 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:44.809 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.809 ************************************ 00:13:44.809 END TEST nvmf_fio_target 00:13:44.809 ************************************ 00:13:44.809 14:24:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:44.809 14:24:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:44.809 14:24:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:44.809 14:24:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:44.809 ************************************ 00:13:44.809 START TEST nvmf_bdevio 00:13:44.809 ************************************ 00:13:44.809 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:45.071 * Looking for test storage... 00:13:45.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:13:45.071 14:24:08 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:13:45.071 14:24:08 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:45.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.071 --rc genhtml_branch_coverage=1 00:13:45.071 --rc genhtml_function_coverage=1 00:13:45.071 --rc genhtml_legend=1 00:13:45.071 --rc geninfo_all_blocks=1 00:13:45.071 --rc geninfo_unexecuted_blocks=1 00:13:45.071 00:13:45.071 ' 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:45.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.071 --rc genhtml_branch_coverage=1 00:13:45.071 --rc genhtml_function_coverage=1 00:13:45.071 --rc genhtml_legend=1 00:13:45.071 --rc geninfo_all_blocks=1 00:13:45.071 --rc geninfo_unexecuted_blocks=1 00:13:45.071 00:13:45.071 ' 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:45.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.071 --rc genhtml_branch_coverage=1 00:13:45.071 --rc genhtml_function_coverage=1 00:13:45.071 --rc genhtml_legend=1 00:13:45.071 --rc geninfo_all_blocks=1 00:13:45.071 --rc geninfo_unexecuted_blocks=1 00:13:45.071 00:13:45.071 ' 00:13:45.071 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:45.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.071 --rc genhtml_branch_coverage=1 00:13:45.071 --rc 
genhtml_function_coverage=1 00:13:45.071 --rc genhtml_legend=1 00:13:45.071 --rc geninfo_all_blocks=1 00:13:45.072 --rc geninfo_unexecuted_blocks=1 00:13:45.072 00:13:45.072 ' 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.072 14:24:08 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:45.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:13:45.072 14:24:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:53.216 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.216 14:24:15 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:53.216 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:53.216 Found net devices under 0000:31:00.0: cvl_0_0 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:53.216 Found net devices under 0000:31:00.1: cvl_0_1 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- 
# nvmf_tcp_init 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:53.216 14:24:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:53.216 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:53.216 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:53.216 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:53.216 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:53.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:53.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:13:53.216 00:13:53.216 --- 10.0.0.2 ping statistics --- 00:13:53.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.216 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:13:53.216 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:53.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:53.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:13:53.216 00:13:53.216 --- 10.0.0.1 ping statistics --- 00:13:53.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.216 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:13:53.216 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.216 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:13:53.216 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:53.216 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.216 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:53.217 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:53.217 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.217 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:53.217 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:53.217 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:53.217 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:53.217 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:53.217 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:53.217 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=2885269 00:13:53.217 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 2885269 00:13:53.217 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:53.217 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2885269 ']' 00:13:53.217 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.217 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:53.217 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.217 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:53.217 14:24:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:53.217 [2024-10-07 14:24:16.247472] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:13:53.217 [2024-10-07 14:24:16.247580] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.217 [2024-10-07 14:24:16.402272] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:53.217 [2024-10-07 14:24:16.620739] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.217 [2024-10-07 14:24:16.620820] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:53.217 [2024-10-07 14:24:16.620834] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.217 [2024-10-07 14:24:16.620849] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.217 [2024-10-07 14:24:16.620860] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:53.217 [2024-10-07 14:24:16.624046] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:13:53.217 [2024-10-07 14:24:16.624157] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:13:53.217 [2024-10-07 14:24:16.624472] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:53.217 [2024-10-07 14:24:16.624487] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:13:53.479 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:53.479 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:13:53.479 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:53.479 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:53.479 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:53.479 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.479 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:53.479 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.479 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:53.479 [2024-10-07 14:24:17.087738] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:13:53.479 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.479 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:53.480 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.480 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:53.480 Malloc0 00:13:53.480 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.480 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:53.480 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.480 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:53.480 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.480 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:53.480 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.480 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:53.480 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.480 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.480 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.480 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:53.741 [2024-10-07 
14:24:17.195242] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.741 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.741 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:53.741 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:53.741 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:13:53.741 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:13:53.741 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:13:53.741 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:13:53.742 { 00:13:53.742 "params": { 00:13:53.742 "name": "Nvme$subsystem", 00:13:53.742 "trtype": "$TEST_TRANSPORT", 00:13:53.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:53.742 "adrfam": "ipv4", 00:13:53.742 "trsvcid": "$NVMF_PORT", 00:13:53.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:53.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:53.742 "hdgst": ${hdgst:-false}, 00:13:53.742 "ddgst": ${ddgst:-false} 00:13:53.742 }, 00:13:53.742 "method": "bdev_nvme_attach_controller" 00:13:53.742 } 00:13:53.742 EOF 00:13:53.742 )") 00:13:53.742 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:13:53.742 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
00:13:53.742 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:13:53.742 14:24:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:13:53.742 "params": { 00:13:53.742 "name": "Nvme1", 00:13:53.742 "trtype": "tcp", 00:13:53.742 "traddr": "10.0.0.2", 00:13:53.742 "adrfam": "ipv4", 00:13:53.742 "trsvcid": "4420", 00:13:53.742 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:53.742 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:53.742 "hdgst": false, 00:13:53.742 "ddgst": false 00:13:53.742 }, 00:13:53.742 "method": "bdev_nvme_attach_controller" 00:13:53.742 }' 00:13:53.742 [2024-10-07 14:24:17.290097] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:13:53.742 [2024-10-07 14:24:17.290222] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885343 ] 00:13:53.742 [2024-10-07 14:24:17.419512] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:54.003 [2024-10-07 14:24:17.606207] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.003 [2024-10-07 14:24:17.606453] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.003 [2024-10-07 14:24:17.606453] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.264 I/O targets: 00:13:54.264 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:54.264 00:13:54.264 00:13:54.264 CUnit - A unit testing framework for C - Version 2.1-3 00:13:54.264 http://cunit.sourceforge.net/ 00:13:54.264 00:13:54.264 00:13:54.264 Suite: bdevio tests on: Nvme1n1 00:13:54.525 Test: blockdev write read block ...passed 00:13:54.525 Test: blockdev write zeroes read block ...passed 00:13:54.525 Test: blockdev write zeroes read no split ...passed 00:13:54.525 Test: blockdev write zeroes read split 
...passed 00:13:54.525 Test: blockdev write zeroes read split partial ...passed 00:13:54.525 Test: blockdev reset ...[2024-10-07 14:24:18.203193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:54.525 [2024-10-07 14:24:18.203303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039ec00 (9): Bad file descriptor 00:13:54.525 [2024-10-07 14:24:18.216616] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:13:54.525 passed 00:13:54.786 Test: blockdev write read 8 blocks ...passed 00:13:54.786 Test: blockdev write read size > 128k ...passed 00:13:54.786 Test: blockdev write read invalid size ...passed 00:13:54.786 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:54.786 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:54.786 Test: blockdev write read max offset ...passed 00:13:54.786 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:54.786 Test: blockdev writev readv 8 blocks ...passed 00:13:54.786 Test: blockdev writev readv 30 x 1block ...passed 00:13:54.786 Test: blockdev writev readv block ...passed 00:13:54.786 Test: blockdev writev readv size > 128k ...passed 00:13:54.786 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:54.786 Test: blockdev comparev and writev ...[2024-10-07 14:24:18.441412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:54.786 [2024-10-07 14:24:18.441445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:54.786 [2024-10-07 14:24:18.441464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:54.786 [2024-10-07 14:24:18.441473] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:54.786 [2024-10-07 14:24:18.441871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:54.786 [2024-10-07 14:24:18.441885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:54.786 [2024-10-07 14:24:18.441902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:54.786 [2024-10-07 14:24:18.441910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:54.786 [2024-10-07 14:24:18.442310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:54.786 [2024-10-07 14:24:18.442327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:54.786 [2024-10-07 14:24:18.442340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:54.786 [2024-10-07 14:24:18.442348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:54.786 [2024-10-07 14:24:18.442750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:54.786 [2024-10-07 14:24:18.442764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:54.786 [2024-10-07 14:24:18.442776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:13:54.786 [2024-10-07 14:24:18.442784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:54.786 passed 00:13:55.047 Test: blockdev nvme passthru rw ...passed 00:13:55.047 Test: blockdev nvme passthru vendor specific ...[2024-10-07 14:24:18.527676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:55.047 [2024-10-07 14:24:18.527698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:55.047 [2024-10-07 14:24:18.527939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:55.047 [2024-10-07 14:24:18.527950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:55.047 [2024-10-07 14:24:18.528172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:55.047 [2024-10-07 14:24:18.528184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:55.047 [2024-10-07 14:24:18.528424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:55.047 [2024-10-07 14:24:18.528436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:55.047 passed 00:13:55.047 Test: blockdev nvme admin passthru ...passed 00:13:55.047 Test: blockdev copy ...passed 00:13:55.047 00:13:55.047 Run Summary: Type Total Ran Passed Failed Inactive 00:13:55.047 suites 1 1 n/a 0 0 00:13:55.047 tests 23 23 23 0 0 00:13:55.047 asserts 152 152 152 0 n/a 00:13:55.047 00:13:55.047 Elapsed time = 1.312 seconds 00:13:55.618 14:24:19 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.618 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.618 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:55.618 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.618 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:55.618 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:55.618 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:55.618 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:13:55.618 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:55.618 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:13:55.618 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:55.618 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:55.618 rmmod nvme_tcp 00:13:55.618 rmmod nvme_fabrics 00:13:55.879 rmmod nvme_keyring 00:13:55.879 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:55.879 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:13:55.879 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:13:55.879 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 2885269 ']' 00:13:55.879 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 2885269 00:13:55.879 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2885269 ']' 
00:13:55.879 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2885269 00:13:55.879 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:13:55.879 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:55.879 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2885269 00:13:55.879 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:13:55.879 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:13:55.879 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2885269' 00:13:55.879 killing process with pid 2885269 00:13:55.879 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2885269 00:13:55.879 14:24:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2885269 00:13:56.818 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:56.819 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:56.819 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:56.819 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:13:56.819 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:13:56.819 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:13:56.819 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:56.819 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:56.819 
14:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:56.819 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.819 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:56.819 14:24:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.729 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:58.729 00:13:58.729 real 0m13.817s 00:13:58.729 user 0m19.656s 00:13:58.729 sys 0m6.432s 00:13:58.729 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:58.729 14:24:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:58.729 ************************************ 00:13:58.729 END TEST nvmf_bdevio 00:13:58.729 ************************************ 00:13:58.729 14:24:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:58.729 00:13:58.729 real 5m19.951s 00:13:58.729 user 12m20.672s 00:13:58.729 sys 1m51.082s 00:13:58.729 14:24:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:58.729 14:24:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:58.729 ************************************ 00:13:58.729 END TEST nvmf_target_core 00:13:58.729 ************************************ 00:13:58.729 14:24:22 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:58.729 14:24:22 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:58.729 14:24:22 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:58.729 14:24:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:58.729 
************************************ 00:13:58.729 START TEST nvmf_target_extra 00:13:58.729 ************************************ 00:13:58.729 14:24:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:58.991 * Looking for test storage... 00:13:58.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:13:58.991 14:24:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:58.991 14:24:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:13:58.991 14:24:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:58.991 14:24:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:58.991 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:58.991 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:58.991 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:58.991 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:13:58.991 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:13:58.991 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:13:58.991 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:13:58.991 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:13:58.991 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:13:58.991 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:13:58.991 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:58.991 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:13:58.991 
14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:13:58.991 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:58.991 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:58.991 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:13:58.991 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:13:58.991 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:58.991 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:13:58.991 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:58.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.992 --rc genhtml_branch_coverage=1 00:13:58.992 --rc genhtml_function_coverage=1 00:13:58.992 --rc genhtml_legend=1 00:13:58.992 --rc geninfo_all_blocks=1 00:13:58.992 
--rc geninfo_unexecuted_blocks=1 00:13:58.992 00:13:58.992 ' 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:58.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.992 --rc genhtml_branch_coverage=1 00:13:58.992 --rc genhtml_function_coverage=1 00:13:58.992 --rc genhtml_legend=1 00:13:58.992 --rc geninfo_all_blocks=1 00:13:58.992 --rc geninfo_unexecuted_blocks=1 00:13:58.992 00:13:58.992 ' 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:58.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.992 --rc genhtml_branch_coverage=1 00:13:58.992 --rc genhtml_function_coverage=1 00:13:58.992 --rc genhtml_legend=1 00:13:58.992 --rc geninfo_all_blocks=1 00:13:58.992 --rc geninfo_unexecuted_blocks=1 00:13:58.992 00:13:58.992 ' 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:58.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.992 --rc genhtml_branch_coverage=1 00:13:58.992 --rc genhtml_function_coverage=1 00:13:58.992 --rc genhtml_legend=1 00:13:58.992 --rc geninfo_all_blocks=1 00:13:58.992 --rc geninfo_unexecuted_blocks=1 00:13:58.992 00:13:58.992 ' 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:58.992 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:58.992 ************************************ 00:13:58.992 START TEST nvmf_example 00:13:58.992 ************************************ 00:13:58.992 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:59.253 * Looking for test storage... 00:13:59.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:13:59.253 
14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:13:59.253 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:59.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.254 --rc genhtml_branch_coverage=1 00:13:59.254 --rc genhtml_function_coverage=1 00:13:59.254 --rc genhtml_legend=1 00:13:59.254 --rc geninfo_all_blocks=1 00:13:59.254 --rc geninfo_unexecuted_blocks=1 00:13:59.254 00:13:59.254 ' 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:59.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.254 --rc genhtml_branch_coverage=1 00:13:59.254 --rc genhtml_function_coverage=1 00:13:59.254 --rc genhtml_legend=1 00:13:59.254 --rc geninfo_all_blocks=1 00:13:59.254 --rc geninfo_unexecuted_blocks=1 00:13:59.254 00:13:59.254 ' 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:59.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.254 --rc genhtml_branch_coverage=1 00:13:59.254 --rc genhtml_function_coverage=1 00:13:59.254 --rc genhtml_legend=1 00:13:59.254 --rc geninfo_all_blocks=1 00:13:59.254 --rc geninfo_unexecuted_blocks=1 00:13:59.254 00:13:59.254 ' 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:59.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.254 --rc 
genhtml_branch_coverage=1 00:13:59.254 --rc genhtml_function_coverage=1 00:13:59.254 --rc genhtml_legend=1 00:13:59.254 --rc geninfo_all_blocks=1 00:13:59.254 --rc geninfo_unexecuted_blocks=1 00:13:59.254 00:13:59.254 ' 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:59.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:59.254 14:24:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.254 
14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:13:59.254 14:24:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:07.392 14:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:07.392 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:07.392 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:07.392 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:07.393 Found net devices under 0000:31:00.0: cvl_0_0 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:07.393 14:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:07.393 Found net devices under 0000:31:00.1: cvl_0_1 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:07.393 
14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:07.393 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:07.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:07.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:14:07.393 00:14:07.393 --- 10.0.0.2 ping statistics --- 00:14:07.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.393 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:07.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:07.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:14:07.393 00:14:07.393 --- 10.0.0.1 ping statistics --- 00:14:07.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.393 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:07.393 14:24:30 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2890433 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2890433 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 2890433 ']' 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:14:07.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:07.393 14:24:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:07.653 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:07.653 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:14:07.653 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:14:07.653 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:07.653 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:07.654 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:07.654 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.654 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:07.654 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.654 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:14:07.654 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.654 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:07.654 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.654 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:14:07.654 
14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:07.654 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.654 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:07.654 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.654 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:14:07.654 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:07.654 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.654 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:07.654 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.654 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:07.654 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.654 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:07.654 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.654 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:14:07.654 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:19.883 Initializing NVMe Controllers 00:14:19.883 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:19.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:19.883 Initialization complete. Launching workers. 00:14:19.883 ======================================================== 00:14:19.883 Latency(us) 00:14:19.883 Device Information : IOPS MiB/s Average min max 00:14:19.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17291.30 67.54 3703.16 964.91 15650.24 00:14:19.883 ======================================================== 00:14:19.883 Total : 17291.30 67.54 3703.16 964.91 15650.24 00:14:19.883 00:14:19.883 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:14:19.883 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:14:19.883 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:19.883 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:14:19.883 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:19.883 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:14:19.883 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:19.883 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:19.883 rmmod nvme_tcp 00:14:19.883 rmmod nvme_fabrics 00:14:19.883 rmmod nvme_keyring 00:14:19.883 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:19.883 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:14:19.883 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:14:19.883 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 2890433 ']' 00:14:19.883 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 2890433 00:14:19.883 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 2890433 ']' 00:14:19.883 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 2890433 00:14:19.883 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:14:19.883 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:19.883 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2890433 00:14:19.883 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:14:19.883 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:14:19.883 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2890433' 00:14:19.883 killing process with pid 2890433 00:14:19.883 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 2890433 00:14:19.883 14:24:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 2890433 00:14:19.883 nvmf threads initialize successfully 00:14:19.883 bdev subsystem init successfully 00:14:19.883 created a nvmf target service 00:14:19.883 create targets's poll groups done 00:14:19.883 all subsystems of target started 00:14:19.883 nvmf target is running 00:14:19.883 all subsystems of target stopped 00:14:19.883 destroy targets's poll groups done 00:14:19.883 destroyed the nvmf target service 00:14:19.883 bdev subsystem 
finish successfully 00:14:19.883 nvmf threads destroy successfully 00:14:19.883 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:19.883 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:19.883 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:19.883 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:14:19.883 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:19.883 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:14:19.883 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:14:19.883 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:19.883 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:19.883 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.883 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:19.883 14:24:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:21.268 00:14:21.268 real 0m22.021s 00:14:21.268 user 0m47.794s 00:14:21.268 sys 0m7.278s 00:14:21.268 
14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:21.268 ************************************ 00:14:21.268 END TEST nvmf_example 00:14:21.268 ************************************ 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:21.268 ************************************ 00:14:21.268 START TEST nvmf_filesystem 00:14:21.268 ************************************ 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:21.268 * Looking for test storage... 
00:14:21.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:14:21.268 
14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:14:21.268 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:21.269 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:21.269 --rc genhtml_branch_coverage=1 00:14:21.269 --rc genhtml_function_coverage=1 00:14:21.269 --rc genhtml_legend=1 00:14:21.269 --rc geninfo_all_blocks=1 00:14:21.269 --rc geninfo_unexecuted_blocks=1 00:14:21.269 00:14:21.269 ' 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:21.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.269 --rc genhtml_branch_coverage=1 00:14:21.269 --rc genhtml_function_coverage=1 00:14:21.269 --rc genhtml_legend=1 00:14:21.269 --rc geninfo_all_blocks=1 00:14:21.269 --rc geninfo_unexecuted_blocks=1 00:14:21.269 00:14:21.269 ' 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:21.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.269 --rc genhtml_branch_coverage=1 00:14:21.269 --rc genhtml_function_coverage=1 00:14:21.269 --rc genhtml_legend=1 00:14:21.269 --rc geninfo_all_blocks=1 00:14:21.269 --rc geninfo_unexecuted_blocks=1 00:14:21.269 00:14:21.269 ' 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:21.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.269 --rc genhtml_branch_coverage=1 00:14:21.269 --rc genhtml_function_coverage=1 00:14:21.269 --rc genhtml_legend=1 00:14:21.269 --rc geninfo_all_blocks=1 00:14:21.269 --rc geninfo_unexecuted_blocks=1 00:14:21.269 00:14:21.269 ' 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:14:21.269 14:24:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:21.269 14:24:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:14:21.269 14:24:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:14:21.269 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_DAOS=n 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=n 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:14:21.270 14:24:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 
00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:14:21.270 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:14:21.535 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:14:21.535 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:14:21.535 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:21.535 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:21.535 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:14:21.535 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:21.535 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:21.535 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:21.535 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:21.535 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:21.535 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:21.535 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:21.535 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # 
[[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:14:21.535 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:14:21.535 #define SPDK_CONFIG_H 00:14:21.535 #define SPDK_CONFIG_AIO_FSDEV 1 00:14:21.535 #define SPDK_CONFIG_APPS 1 00:14:21.535 #define SPDK_CONFIG_ARCH native 00:14:21.535 #define SPDK_CONFIG_ASAN 1 00:14:21.535 #undef SPDK_CONFIG_AVAHI 00:14:21.535 #undef SPDK_CONFIG_CET 00:14:21.535 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:14:21.535 #define SPDK_CONFIG_COVERAGE 1 00:14:21.536 #define SPDK_CONFIG_CROSS_PREFIX 00:14:21.536 #undef SPDK_CONFIG_CRYPTO 00:14:21.536 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:21.536 #undef SPDK_CONFIG_CUSTOMOCF 00:14:21.536 #undef SPDK_CONFIG_DAOS 00:14:21.536 #define SPDK_CONFIG_DAOS_DIR 00:14:21.536 #define SPDK_CONFIG_DEBUG 1 00:14:21.536 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:21.536 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:14:21.536 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:21.536 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:21.536 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:21.536 #undef SPDK_CONFIG_DPDK_UADK 00:14:21.536 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:14:21.536 #define SPDK_CONFIG_EXAMPLES 1 00:14:21.536 #undef SPDK_CONFIG_FC 00:14:21.536 #define SPDK_CONFIG_FC_PATH 00:14:21.536 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:21.536 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:21.536 #define SPDK_CONFIG_FSDEV 1 00:14:21.536 #undef SPDK_CONFIG_FUSE 00:14:21.536 #undef SPDK_CONFIG_FUZZER 00:14:21.536 #define SPDK_CONFIG_FUZZER_LIB 00:14:21.536 #undef SPDK_CONFIG_GOLANG 00:14:21.536 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:21.536 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:21.536 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:21.536 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:14:21.536 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:21.536 #undef 
SPDK_CONFIG_HAVE_LIBBSD 00:14:21.536 #undef SPDK_CONFIG_HAVE_LZ4 00:14:21.536 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:14:21.536 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:14:21.536 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:21.536 #define SPDK_CONFIG_IDXD 1 00:14:21.536 #define SPDK_CONFIG_IDXD_KERNEL 1 00:14:21.536 #undef SPDK_CONFIG_IPSEC_MB 00:14:21.536 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:21.536 #define SPDK_CONFIG_ISAL 1 00:14:21.536 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:21.536 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:21.536 #define SPDK_CONFIG_LIBDIR 00:14:21.536 #undef SPDK_CONFIG_LTO 00:14:21.536 #define SPDK_CONFIG_MAX_LCORES 128 00:14:21.536 #define SPDK_CONFIG_NVME_CUSE 1 00:14:21.536 #undef SPDK_CONFIG_OCF 00:14:21.536 #define SPDK_CONFIG_OCF_PATH 00:14:21.536 #define SPDK_CONFIG_OPENSSL_PATH 00:14:21.536 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:21.536 #define SPDK_CONFIG_PGO_DIR 00:14:21.536 #undef SPDK_CONFIG_PGO_USE 00:14:21.536 #define SPDK_CONFIG_PREFIX /usr/local 00:14:21.536 #undef SPDK_CONFIG_RAID5F 00:14:21.536 #undef SPDK_CONFIG_RBD 00:14:21.536 #define SPDK_CONFIG_RDMA 1 00:14:21.536 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:21.536 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:21.536 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:21.536 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:21.536 #define SPDK_CONFIG_SHARED 1 00:14:21.536 #undef SPDK_CONFIG_SMA 00:14:21.536 #define SPDK_CONFIG_TESTS 1 00:14:21.536 #undef SPDK_CONFIG_TSAN 00:14:21.536 #define SPDK_CONFIG_UBLK 1 00:14:21.536 #define SPDK_CONFIG_UBSAN 1 00:14:21.536 #undef SPDK_CONFIG_UNIT_TESTS 00:14:21.536 #undef SPDK_CONFIG_URING 00:14:21.536 #define SPDK_CONFIG_URING_PATH 00:14:21.536 #undef SPDK_CONFIG_URING_ZNS 00:14:21.536 #undef SPDK_CONFIG_USDT 00:14:21.536 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:21.536 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:21.536 #undef SPDK_CONFIG_VFIO_USER 00:14:21.536 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:21.536 
#define SPDK_CONFIG_VHOST 1 00:14:21.536 #define SPDK_CONFIG_VIRTIO 1 00:14:21.536 #undef SPDK_CONFIG_VTUNE 00:14:21.536 #define SPDK_CONFIG_VTUNE_DIR 00:14:21.536 #define SPDK_CONFIG_WERROR 1 00:14:21.536 #define SPDK_CONFIG_WPDK_DIR 00:14:21.536 #undef SPDK_CONFIG_XNVME 00:14:21.536 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:21.536 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:21.536 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:21.536 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:14:21.536 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.536 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.536 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.536 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.536 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
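The long `== *\#\d\e\f\i\n\e\ ...*` expression ending above is `applications.sh` glob-matching the entire contents of `spdk/include/spdk/config.h` against the literal substring `#define SPDK_CONFIG_DEBUG` to detect a debug build. A hedged sketch of that detection pattern, using a temporary file instead of the real config header:

```shell
# Sketch of the debug-build check traced above: read a config header
# and substring-match it with [[ ... == *pattern* ]]. The temp file
# stands in for spdk/include/spdk/config.h.
config_h=$(mktemp)
printf '#define SPDK_CONFIG_DEBUG 1\n#define SPDK_CONFIG_VHOST 1\n' > "$config_h"

if [[ $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    debug_build=1
else
    debug_build=0
fi
rm -f "$config_h"
echo "$debug_build"
```

Quoting the pattern (or escaping every character, as the generated trace does) keeps the `#define ...` text from being interpreted as glob syntax.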
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.536 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.536 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:21.536 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.536 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:14:21.536 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:14:21.536 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
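The `paths/export.sh` lines above prepend the same tool directories (`/opt/golangci/...`, `/opt/protoc/...`, `/opt/go/...`) on every `source`, so `PATH` visibly accumulates duplicate entries. Duplicates are harmless for lookup (the first hit wins) but bloat the environment. A common dedup idiom, not part of SPDK itself, that keeps the first occurrence of each entry:

```shell
# Remove duplicate entries from a colon-separated path list,
# preserving the order of first occurrence.
dedup_path() {
    local entry out= seen=:
    local IFS=:
    for entry in $1; do                      # split on ":" via IFS
        case "$seen" in
            *:"$entry":*) ;;                 # already kept, skip
            *) out=${out:+$out:}$entry       # append with ":" separator
               seen=$seen$entry: ;;
        esac
    done
    printf '%s\n' "$out"
}
```

Applied as `PATH=$(dedup_path "$PATH")`, this would collapse the repeated prefixes seen in the trace while keeping resolution order unchanged.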
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:14:21.536 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:14:21.537 14:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:14:21.537 
14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:14:21.537 14:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:14:21.537 
14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:14:21.537 14:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:21.537 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 2893283 ]] 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 2893283 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.hOLspX 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.hOLspX/tests/target /tmp/spdk.hOLspX 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:14:21.538 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
sizes["$mount"]=67108864 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=156295168 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5128134656 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=122299453440 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356529664 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=7057076224 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 
00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64666898432 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678264832 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847889920 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871306752 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23416832 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=175104 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:14:21.539 14:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=328704 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677584896 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678264832 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=679936 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:14:21.539 * Looking for test storage... 
00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=122299453440 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9271668736 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.539 14:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:21.539 14:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:14:21.539 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:21.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.540 --rc genhtml_branch_coverage=1 00:14:21.540 --rc genhtml_function_coverage=1 00:14:21.540 --rc genhtml_legend=1 00:14:21.540 --rc geninfo_all_blocks=1 00:14:21.540 --rc geninfo_unexecuted_blocks=1 00:14:21.540 00:14:21.540 ' 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:21.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.540 --rc genhtml_branch_coverage=1 00:14:21.540 --rc genhtml_function_coverage=1 00:14:21.540 --rc genhtml_legend=1 00:14:21.540 --rc geninfo_all_blocks=1 00:14:21.540 --rc geninfo_unexecuted_blocks=1 00:14:21.540 00:14:21.540 ' 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:21.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.540 --rc genhtml_branch_coverage=1 00:14:21.540 --rc genhtml_function_coverage=1 00:14:21.540 --rc genhtml_legend=1 00:14:21.540 --rc geninfo_all_blocks=1 00:14:21.540 --rc geninfo_unexecuted_blocks=1 00:14:21.540 00:14:21.540 ' 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:21.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.540 --rc genhtml_branch_coverage=1 00:14:21.540 --rc genhtml_function_coverage=1 00:14:21.540 --rc genhtml_legend=1 00:14:21.540 --rc geninfo_all_blocks=1 00:14:21.540 --rc geninfo_unexecuted_blocks=1 00:14:21.540 00:14:21.540 ' 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:21.540 14:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.540 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:21.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:14:21.801 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:29.940 14:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:29.940 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:29.941 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:29.941 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.941 14:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:29.941 Found net devices under 0000:31:00.0: cvl_0_0 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:29.941 Found net devices under 0000:31:00.1: cvl_0_1 00:14:29.941 14:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:29.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:29.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:14:29.941 00:14:29.941 --- 10.0.0.2 ping statistics --- 00:14:29.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.941 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:29.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:29.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:14:29.941 00:14:29.941 --- 10.0.0.1 ping statistics --- 00:14:29.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.941 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:14:29.941 14:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:29.941 ************************************ 00:14:29.941 START TEST nvmf_filesystem_no_in_capsule 00:14:29.941 ************************************ 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=2897245 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 2897245 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 2897245 ']' 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:29.941 14:24:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:29.942 [2024-10-07 14:24:52.904441] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:14:29.942 [2024-10-07 14:24:52.904543] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.942 [2024-10-07 14:24:53.034082] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:29.942 [2024-10-07 14:24:53.213690] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.942 [2024-10-07 14:24:53.213741] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:29.942 [2024-10-07 14:24:53.213754] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.942 [2024-10-07 14:24:53.213767] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.942 [2024-10-07 14:24:53.213777] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:29.942 [2024-10-07 14:24:53.216152] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.942 [2024-10-07 14:24:53.216234] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.942 [2024-10-07 14:24:53.216353] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.942 [2024-10-07 14:24:53.216377] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.203 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:30.203 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:14:30.203 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:30.203 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:30.203 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:30.203 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.203 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:30.203 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:30.203 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.203 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:30.203 [2024-10-07 14:24:53.724715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.203 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.203 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:30.203 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.203 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:30.464 Malloc1 00:14:30.464 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.464 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:30.464 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.464 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:30.464 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.464 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:30.464 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.464 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:30.464 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.464 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.464 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.464 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:30.464 [2024-10-07 14:24:54.160375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.464 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.464 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:30.464 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:14:30.464 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:30.464 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:14:30.464 14:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:14:30.464 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:30.464 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.464 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:30.725 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.725 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:30.725 { 00:14:30.725 "name": "Malloc1", 00:14:30.725 "aliases": [ 00:14:30.725 "fcc90c85-2483-494e-9af5-443f0362f926" 00:14:30.725 ], 00:14:30.725 "product_name": "Malloc disk", 00:14:30.725 "block_size": 512, 00:14:30.725 "num_blocks": 1048576, 00:14:30.725 "uuid": "fcc90c85-2483-494e-9af5-443f0362f926", 00:14:30.725 "assigned_rate_limits": { 00:14:30.725 "rw_ios_per_sec": 0, 00:14:30.725 "rw_mbytes_per_sec": 0, 00:14:30.725 "r_mbytes_per_sec": 0, 00:14:30.725 "w_mbytes_per_sec": 0 00:14:30.725 }, 00:14:30.725 "claimed": true, 00:14:30.725 "claim_type": "exclusive_write", 00:14:30.725 "zoned": false, 00:14:30.725 "supported_io_types": { 00:14:30.725 "read": true, 00:14:30.725 "write": true, 00:14:30.725 "unmap": true, 00:14:30.725 "flush": true, 00:14:30.725 "reset": true, 00:14:30.725 "nvme_admin": false, 00:14:30.725 "nvme_io": false, 00:14:30.725 "nvme_io_md": false, 00:14:30.725 "write_zeroes": true, 00:14:30.725 "zcopy": true, 00:14:30.725 "get_zone_info": false, 00:14:30.725 "zone_management": false, 00:14:30.725 "zone_append": false, 00:14:30.725 "compare": false, 00:14:30.725 "compare_and_write": 
false, 00:14:30.725 "abort": true, 00:14:30.725 "seek_hole": false, 00:14:30.725 "seek_data": false, 00:14:30.725 "copy": true, 00:14:30.725 "nvme_iov_md": false 00:14:30.725 }, 00:14:30.725 "memory_domains": [ 00:14:30.725 { 00:14:30.725 "dma_device_id": "system", 00:14:30.725 "dma_device_type": 1 00:14:30.725 }, 00:14:30.725 { 00:14:30.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.725 "dma_device_type": 2 00:14:30.725 } 00:14:30.725 ], 00:14:30.725 "driver_specific": {} 00:14:30.725 } 00:14:30.725 ]' 00:14:30.725 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:30.725 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:14:30.725 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:30.725 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:14:30.725 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:14:30.725 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:14:30.725 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:30.725 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:32.637 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:14:32.637 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:14:32.637 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:32.637 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:32.637 14:24:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:14:34.549 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:34.549 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:34.549 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:34.549 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:34.549 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:34.549 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:14:34.549 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:34.549 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:34.549 14:24:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:34.549 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:34.549 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:34.549 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:34.549 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:34.549 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:34.549 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:34.549 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:34.549 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:34.549 14:24:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:34.816 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:35.758 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:14:35.758 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:35.758 14:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:35.758 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:35.758 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:36.018 ************************************ 00:14:36.018 START TEST filesystem_ext4 00:14:36.018 ************************************ 00:14:36.018 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:36.018 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:36.018 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:36.018 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:36.018 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:14:36.018 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:14:36.018 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:14:36.018 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:14:36.018 14:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:14:36.018 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:14:36.018 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:36.018 mke2fs 1.47.0 (5-Feb-2023) 00:14:36.018 Discarding device blocks: 0/522240 done 00:14:36.018 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:36.018 Filesystem UUID: a230941d-70aa-4971-8a98-ed1d039f0f37 00:14:36.018 Superblock backups stored on blocks: 00:14:36.018 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:36.018 00:14:36.018 Allocating group tables: 0/64 done 00:14:36.018 Writing inode tables: 0/64 done 00:14:39.316 Creating journal (8192 blocks): done 00:14:39.316 Writing superblocks and filesystem accounting information: 0/64 done 00:14:39.316 00:14:39.316 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:14:39.316 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:44.605 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:44.605 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:14:44.605 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:44.605 14:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:14:44.605 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:44.605 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:44.605 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2897245 00:14:44.605 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:44.605 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:44.605 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:44.605 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:44.605 00:14:44.605 real 0m8.812s 00:14:44.605 user 0m0.032s 00:14:44.605 sys 0m0.078s 00:14:44.605 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:44.605 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:44.605 ************************************ 00:14:44.605 END TEST filesystem_ext4 00:14:44.605 ************************************ 00:14:44.865 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:44.865 
14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:44.865 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:44.865 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:44.865 ************************************ 00:14:44.865 START TEST filesystem_btrfs 00:14:44.865 ************************************ 00:14:44.865 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:44.865 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:44.865 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:44.865 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:44.865 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:14:44.865 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:14:44.865 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:14:44.865 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:14:44.865 14:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:14:44.865 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:14:44.865 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:45.126 btrfs-progs v6.8.1 00:14:45.126 See https://btrfs.readthedocs.io for more information. 00:14:45.126 00:14:45.126 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:14:45.126 NOTE: several default settings have changed in version 5.15, please make sure 00:14:45.126 this does not affect your deployments: 00:14:45.126 - DUP for metadata (-m dup) 00:14:45.126 - enabled no-holes (-O no-holes) 00:14:45.126 - enabled free-space-tree (-R free-space-tree) 00:14:45.126 00:14:45.126 Label: (null) 00:14:45.126 UUID: f09678c2-703a-4938-86b6-e1433d076086 00:14:45.126 Node size: 16384 00:14:45.126 Sector size: 4096 (CPU page size: 4096) 00:14:45.126 Filesystem size: 510.00MiB 00:14:45.126 Block group profiles: 00:14:45.126 Data: single 8.00MiB 00:14:45.126 Metadata: DUP 32.00MiB 00:14:45.126 System: DUP 8.00MiB 00:14:45.126 SSD detected: yes 00:14:45.126 Zoned device: no 00:14:45.126 Features: extref, skinny-metadata, no-holes, free-space-tree 00:14:45.126 Checksum: crc32c 00:14:45.126 Number of devices: 1 00:14:45.126 Devices: 00:14:45.126 ID SIZE PATH 00:14:45.126 1 510.00MiB /dev/nvme0n1p1 00:14:45.126 00:14:45.126 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:14:45.126 14:25:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:45.387 14:25:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:45.387 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2897245 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:45.647 00:14:45.647 real 0m0.779s 00:14:45.647 user 0m0.027s 00:14:45.647 sys 0m0.123s 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:45.647 
14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:45.647 ************************************ 00:14:45.647 END TEST filesystem_btrfs 00:14:45.647 ************************************ 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:45.647 ************************************ 00:14:45.647 START TEST filesystem_xfs 00:14:45.647 ************************************ 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:14:45.647 14:25:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:45.647 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:45.647 = sectsz=512 attr=2, projid32bit=1 00:14:45.647 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:45.647 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:45.647 data = bsize=4096 blocks=130560, imaxpct=25 00:14:45.647 = sunit=0 swidth=0 blks 00:14:45.647 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:45.647 log =internal log bsize=4096 blocks=16384, version=2 00:14:45.647 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:45.647 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:47.033 Discarding blocks...Done. 
00:14:47.033 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:14:47.033 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2897245 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:48.948 14:25:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:48.948 00:14:48.948 real 0m3.019s 00:14:48.948 user 0m0.030s 00:14:48.948 sys 0m0.074s 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:48.948 ************************************ 00:14:48.948 END TEST filesystem_xfs 00:14:48.948 ************************************ 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:48.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:48.948 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2897245 00:14:48.949 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2897245 ']' 00:14:48.949 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2897245 00:14:48.949 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:14:48.949 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:48.949 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2897245 00:14:48.949 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:48.949 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:48.949 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2897245' 00:14:48.949 killing process with pid 2897245 00:14:48.949 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 2897245 00:14:48.949 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 2897245 00:14:50.862 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:50.862 00:14:50.862 real 0m21.627s 00:14:50.862 user 1m23.725s 00:14:50.862 sys 0m1.561s 00:14:50.862 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:50.862 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:50.862 ************************************ 00:14:50.862 END TEST nvmf_filesystem_no_in_capsule 00:14:50.862 ************************************ 00:14:50.862 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:14:50.862 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:50.862 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:50.862 14:25:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:50.862 ************************************ 00:14:50.862 START TEST nvmf_filesystem_in_capsule 00:14:50.862 ************************************ 00:14:50.862 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:14:50.862 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:14:50.862 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:50.862 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:50.862 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:50.862 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:50.862 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=2901651 00:14:50.862 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 2901651 00:14:50.862 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2901651 ']' 00:14:50.862 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.862 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:50.862 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.863 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:50.863 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:50.863 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:51.123 [2024-10-07 14:25:14.611986] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:14:51.123 [2024-10-07 14:25:14.612103] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.123 [2024-10-07 14:25:14.747479] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:51.384 [2024-10-07 14:25:14.930091] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.384 [2024-10-07 14:25:14.930139] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.384 [2024-10-07 14:25:14.930151] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.384 [2024-10-07 14:25:14.930163] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.384 [2024-10-07 14:25:14.930173] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
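The target launch traced just above (nvmf_tgt inside a network namespace, four reactors) can be condensed into the following sketch. The namespace name, binary path, and flag values are taken from this log; the helper function name is hypothetical, and this only assembles the command line rather than starting a real target:

```shell
# Builds the nvmf_tgt invocation recorded in this log. Illustrative only;
# adjust the namespace and build path for your environment.
build_nvmf_tgt_cmd() {
    local netns="$1" bindir="$2"
    # -i 0: shared-memory instance id (matches the spdk0 file prefix)
    # -e 0xFFFF: tracepoint group mask (see the app_setup_trace notices)
    # -m 0xF: core mask, i.e. the four reactors started on cores 0-3
    echo "ip netns exec ${netns} ${bindir}/nvmf_tgt -i 0 -e 0xFFFF -m 0xF"
}

build_nvmf_tgt_cmd cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
```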
00:14:51.384 [2024-10-07 14:25:14.932401] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.384 [2024-10-07 14:25:14.932485] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.384 [2024-10-07 14:25:14.932603] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.384 [2024-10-07 14:25:14.932624] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:51.954 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:51.954 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:14:51.954 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:51.954 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:51.954 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:51.954 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.955 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:51.955 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:14:51.955 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.955 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:51.955 [2024-10-07 14:25:15.424285] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.955 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.955 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:51.955 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.955 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:52.215 Malloc1 00:14:52.215 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.215 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:52.215 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.215 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:52.215 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.215 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:52.215 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.215 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:52.215 14:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.215 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:52.215 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.215 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:52.215 [2024-10-07 14:25:15.860204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.215 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.215 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:52.215 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:14:52.215 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:52.215 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:14:52.215 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:14:52.215 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:52.215 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.215 14:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:52.215 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.215 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:52.215 { 00:14:52.215 "name": "Malloc1", 00:14:52.215 "aliases": [ 00:14:52.215 "293875b8-5b84-4ac8-997c-90cba87d8a6b" 00:14:52.215 ], 00:14:52.215 "product_name": "Malloc disk", 00:14:52.215 "block_size": 512, 00:14:52.215 "num_blocks": 1048576, 00:14:52.215 "uuid": "293875b8-5b84-4ac8-997c-90cba87d8a6b", 00:14:52.215 "assigned_rate_limits": { 00:14:52.215 "rw_ios_per_sec": 0, 00:14:52.215 "rw_mbytes_per_sec": 0, 00:14:52.215 "r_mbytes_per_sec": 0, 00:14:52.215 "w_mbytes_per_sec": 0 00:14:52.215 }, 00:14:52.215 "claimed": true, 00:14:52.215 "claim_type": "exclusive_write", 00:14:52.215 "zoned": false, 00:14:52.215 "supported_io_types": { 00:14:52.215 "read": true, 00:14:52.215 "write": true, 00:14:52.215 "unmap": true, 00:14:52.215 "flush": true, 00:14:52.215 "reset": true, 00:14:52.215 "nvme_admin": false, 00:14:52.215 "nvme_io": false, 00:14:52.216 "nvme_io_md": false, 00:14:52.216 "write_zeroes": true, 00:14:52.216 "zcopy": true, 00:14:52.216 "get_zone_info": false, 00:14:52.216 "zone_management": false, 00:14:52.216 "zone_append": false, 00:14:52.216 "compare": false, 00:14:52.216 "compare_and_write": false, 00:14:52.216 "abort": true, 00:14:52.216 "seek_hole": false, 00:14:52.216 "seek_data": false, 00:14:52.216 "copy": true, 00:14:52.216 "nvme_iov_md": false 00:14:52.216 }, 00:14:52.216 "memory_domains": [ 00:14:52.216 { 00:14:52.216 "dma_device_id": "system", 00:14:52.216 "dma_device_type": 1 00:14:52.216 }, 00:14:52.216 { 00:14:52.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.216 "dma_device_type": 2 00:14:52.216 } 00:14:52.216 ], 00:14:52.216 
"driver_specific": {} 00:14:52.216 } 00:14:52.216 ]' 00:14:52.216 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:52.476 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:14:52.476 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:52.476 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:14:52.476 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:14:52.476 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:14:52.476 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:52.476 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:53.859 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:53.859 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:14:53.859 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:53.859 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:14:53.859 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:14:56.403 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:56.403 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:56.403 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:56.403 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:56.403 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:56.403 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:14:56.403 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:56.403 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:56.403 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:56.403 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:56.403 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:56.403 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:56.403 14:25:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:56.403 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:56.403 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:56.403 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:56.403 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:56.403 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:56.663 14:25:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:57.604 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:14:57.604 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:57.604 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:57.604 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:57.604 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:57.604 ************************************ 00:14:57.604 START TEST filesystem_in_capsule_ext4 00:14:57.604 ************************************ 00:14:57.604 14:25:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:57.604 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:57.604 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:57.604 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:57.604 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:14:57.604 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:14:57.604 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:14:57.604 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:14:57.604 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:14:57.604 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:14:57.604 14:25:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:57.604 mke2fs 1.47.0 (5-Feb-2023) 00:14:57.604 Discarding device blocks: 
0/522240 done 00:14:57.604 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:57.604 Filesystem UUID: 25dd9ce2-7d7d-4825-88c5-c108680700b3 00:14:57.604 Superblock backups stored on blocks: 00:14:57.604 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:57.604 00:14:57.604 Allocating group tables: 0/64 done 00:14:57.604 Writing inode tables: 0/64 done 00:14:57.864 Creating journal (8192 blocks): done 00:15:00.080 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:15:00.080 00:15:00.080 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:15:00.080 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2901651 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:06.849 00:15:06.849 real 0m8.599s 00:15:06.849 user 0m0.024s 00:15:06.849 sys 0m0.085s 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:15:06.849 ************************************ 00:15:06.849 END TEST filesystem_in_capsule_ext4 00:15:06.849 ************************************ 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:06.849 ************************************ 00:15:06.849 START 
TEST filesystem_in_capsule_btrfs 00:15:06.849 ************************************ 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:15:06.849 14:25:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:15:06.849 btrfs-progs v6.8.1 00:15:06.849 See https://btrfs.readthedocs.io for more information. 00:15:06.849 00:15:06.849 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:15:06.849 NOTE: several default settings have changed in version 5.15, please make sure 00:15:06.849 this does not affect your deployments: 00:15:06.849 - DUP for metadata (-m dup) 00:15:06.849 - enabled no-holes (-O no-holes) 00:15:06.849 - enabled free-space-tree (-R free-space-tree) 00:15:06.849 00:15:06.849 Label: (null) 00:15:06.849 UUID: dcc3b9e2-c6bb-4c9e-a1c5-91f5647d924b 00:15:06.849 Node size: 16384 00:15:06.849 Sector size: 4096 (CPU page size: 4096) 00:15:06.849 Filesystem size: 510.00MiB 00:15:06.849 Block group profiles: 00:15:06.849 Data: single 8.00MiB 00:15:06.849 Metadata: DUP 32.00MiB 00:15:06.849 System: DUP 8.00MiB 00:15:06.849 SSD detected: yes 00:15:06.849 Zoned device: no 00:15:06.849 Features: extref, skinny-metadata, no-holes, free-space-tree 00:15:06.849 Checksum: crc32c 00:15:06.849 Number of devices: 1 00:15:06.849 Devices: 00:15:06.849 ID SIZE PATH 00:15:06.849 1 510.00MiB /dev/nvme0n1p1 00:15:06.849 00:15:06.849 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:15:06.849 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:07.110 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:07.110 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:15:07.110 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:07.110 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:15:07.110 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:15:07.110 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:07.110 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2901651 00:15:07.110 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:07.110 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:07.110 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:07.110 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:07.110 00:15:07.110 real 0m0.949s 00:15:07.110 user 0m0.031s 00:15:07.110 sys 0m0.120s 00:15:07.110 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:07.110 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:15:07.110 ************************************ 00:15:07.110 END TEST filesystem_in_capsule_btrfs 00:15:07.110 ************************************ 00:15:07.372 14:25:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:15:07.372 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:07.372 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:07.372 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:07.372 ************************************ 00:15:07.372 START TEST filesystem_in_capsule_xfs 00:15:07.372 ************************************ 00:15:07.372 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:15:07.372 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:15:07.372 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:07.372 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:15:07.372 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:15:07.372 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:15:07.372 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:15:07.372 
14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:15:07.372 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:15:07.372 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:15:07.372 14:25:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:15:07.372 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:15:07.372 = sectsz=512 attr=2, projid32bit=1 00:15:07.372 = crc=1 finobt=1, sparse=1, rmapbt=0 00:15:07.372 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:15:07.372 data = bsize=4096 blocks=130560, imaxpct=25 00:15:07.372 = sunit=0 swidth=0 blks 00:15:07.372 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:15:07.372 log =internal log bsize=4096 blocks=16384, version=2 00:15:07.372 = sectsz=512 sunit=0 blks, lazy-count=1 00:15:07.372 realtime =none extsz=4096 blocks=0, rtextents=0 00:15:08.316 Discarding blocks...Done. 
00:15:08.316 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:15:08.316 14:25:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:10.861 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:11.122 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:15:11.122 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:11.122 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:15:11.122 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:15:11.122 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:11.122 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2901651 00:15:11.122 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:11.122 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:11.122 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:15:11.122 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:11.122 00:15:11.122 real 0m3.807s 00:15:11.122 user 0m0.025s 00:15:11.122 sys 0m0.083s 00:15:11.122 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:11.122 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:15:11.122 ************************************ 00:15:11.122 END TEST filesystem_in_capsule_xfs 00:15:11.122 ************************************ 00:15:11.122 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:15:11.383 14:25:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:15:11.383 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:11.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.644 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:11.644 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:15:11.644 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:11.644 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.644 14:25:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:11.644 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.644 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:15:11.644 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.644 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.644 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:11.644 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.644 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:11.644 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2901651 00:15:11.644 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2901651 ']' 00:15:11.645 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2901651 00:15:11.645 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:15:11.645 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:11.645 14:25:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2901651 00:15:11.906 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:11.906 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:11.906 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2901651' 00:15:11.906 killing process with pid 2901651 00:15:11.906 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 2901651 00:15:11.906 14:25:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 2901651 00:15:13.876 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:15:13.876 00:15:13.876 real 0m22.638s 00:15:13.876 user 1m27.635s 00:15:13.876 sys 0m1.646s 00:15:13.876 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:13.876 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:13.876 ************************************ 00:15:13.876 END TEST nvmf_filesystem_in_capsule 00:15:13.876 ************************************ 00:15:13.876 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:15:13.876 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:13.876 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:15:13.876 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:13.876 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:15:13.876 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:13.876 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:13.876 rmmod nvme_tcp 00:15:13.876 rmmod nvme_fabrics 00:15:13.876 rmmod nvme_keyring 00:15:13.876 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:13.876 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:15:13.876 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:15:13.876 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:15:13.876 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:13.876 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:13.876 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:13.876 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:15:13.876 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:15:13.877 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:13.877 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:15:13.877 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:13.877 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:13.877 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.877 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:13.877 14:25:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.787 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:15.787 00:15:15.787 real 0m54.601s 00:15:15.787 user 2m53.816s 00:15:15.787 sys 0m9.031s 00:15:15.788 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:15.788 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:15.788 ************************************ 00:15:15.788 END TEST nvmf_filesystem 00:15:15.788 ************************************ 00:15:15.788 14:25:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:15.788 14:25:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:15.788 14:25:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:15.788 14:25:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:15.788 ************************************ 00:15:15.788 START TEST nvmf_target_discovery 00:15:15.788 ************************************ 00:15:15.788 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:16.048 * Looking for test storage... 
00:15:16.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.048 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:16.048 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:15:16.048 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:16.048 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:15:16.049 
14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:16.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.049 --rc genhtml_branch_coverage=1 00:15:16.049 --rc genhtml_function_coverage=1 00:15:16.049 --rc genhtml_legend=1 00:15:16.049 --rc geninfo_all_blocks=1 00:15:16.049 --rc geninfo_unexecuted_blocks=1 00:15:16.049 00:15:16.049 ' 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:16.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.049 --rc genhtml_branch_coverage=1 00:15:16.049 --rc genhtml_function_coverage=1 00:15:16.049 --rc genhtml_legend=1 00:15:16.049 --rc geninfo_all_blocks=1 00:15:16.049 --rc geninfo_unexecuted_blocks=1 00:15:16.049 00:15:16.049 ' 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:16.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.049 --rc genhtml_branch_coverage=1 00:15:16.049 --rc genhtml_function_coverage=1 00:15:16.049 --rc genhtml_legend=1 00:15:16.049 --rc geninfo_all_blocks=1 00:15:16.049 --rc geninfo_unexecuted_blocks=1 00:15:16.049 00:15:16.049 ' 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:16.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.049 --rc genhtml_branch_coverage=1 00:15:16.049 --rc genhtml_function_coverage=1 00:15:16.049 --rc genhtml_legend=1 00:15:16.049 --rc geninfo_all_blocks=1 00:15:16.049 --rc geninfo_unexecuted_blocks=1 00:15:16.049 00:15:16.049 ' 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.049 14:25:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:16.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:16.049 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.050 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:16.050 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:16.050 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:16.050 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.050 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:16.050 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.050 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:16.050 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:16.050 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:15:16.050 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.186 14:25:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:24.186 14:25:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:24.186 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:24.186 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:24.186 14:25:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:24.186 Found net devices under 0000:31:00.0: cvl_0_0 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:24.186 14:25:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:24.186 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:24.187 Found net devices under 0000:31:00.1: cvl_0_1 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:24.187 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:24.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:24.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:15:24.187 00:15:24.187 --- 10.0.0.2 ping statistics --- 00:15:24.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.187 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:24.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:24.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:15:24.187 00:15:24.187 --- 10.0.0.1 ping statistics --- 00:15:24.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.187 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=2910499 00:15:24.187 14:25:47 
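The network plumbing logged above (nvmf/common.sh `nvmf_tcp_init`) moves one port of the E810 NIC into a dedicated network namespace for the target and leaves the other in the root namespace for the initiator, then verifies connectivity both ways. A dry-run sketch of those steps — interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addresses are taken from the log; `run` only prints, so nothing here needs root:

```shell
#!/usr/bin/env sh
# Dry-run sketch of the nvmf_tcp_init sequence seen in the log above.
# "run" only echoes the command; the real script executes these with root.
run() { printf '+ %s\n' "$*"; }

nvmf_tcp_init_sketch() {
    TARGET_IF=cvl_0_0      # moved into its own namespace for the NVMe-oF target
    INITIATOR_IF=cvl_0_1   # stays in the root namespace for the initiator
    NS=cvl_0_0_ns_spdk

    run ip -4 addr flush "$TARGET_IF"
    run ip -4 addr flush "$INITIATOR_IF"
    run ip netns add "$NS"
    run ip link set "$TARGET_IF" netns "$NS"
    run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    run ip link set "$INITIATOR_IF" up
    run ip netns exec "$NS" ip link set "$TARGET_IF" up
    run ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP port on the initiator-side interface
    run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    # connectivity check in both directions, as in the log
    run ping -c 1 10.0.0.2
    run ip netns exec "$NS" ping -c 1 10.0.0.1
}
nvmf_tcp_init_sketch
```

The namespace split is what lets target and initiator share one host while still exercising a real NIC-to-NIC TCP path.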
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 2910499 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 2910499 ']' 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:24.187 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.187 [2024-10-07 14:25:47.173865] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:15:24.187 [2024-10-07 14:25:47.173967] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.187 [2024-10-07 14:25:47.300592] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:24.187 [2024-10-07 14:25:47.479684] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:24.187 [2024-10-07 14:25:47.479735] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:24.187 [2024-10-07 14:25:47.479747] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:24.187 [2024-10-07 14:25:47.479759] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:24.187 [2024-10-07 14:25:47.479769] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:24.187 [2024-10-07 14:25:47.482064] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:24.187 [2024-10-07 14:25:47.482187] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:24.187 [2024-10-07 14:25:47.482312] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.187 [2024-10-07 14:25:47.482328] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:24.449 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:24.449 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:15:24.449 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:24.449 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:24.449 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.449 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:24.449 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:24.449 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.449 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.449 [2024-10-07 14:25:47.980860] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:24.449 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.449 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:15:24.449 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:24.449 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:15:24.449 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.449 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.449 Null1 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.449 
14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.449 [2024-10-07 14:25:48.041272] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.449 Null2 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.449 
14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.449 Null3 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.449 Null4 00:15:24.449 
14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.449 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.711 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.711 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:15:24.711 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.711 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.711 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.711 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:15:24.711 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.711 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.711 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.711 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:24.711 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.711 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.711 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.711 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:15:24.711 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.711 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.711 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.711 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:15:24.711 00:15:24.711 Discovery Log Number of Records 6, Generation counter 6 00:15:24.711 =====Discovery Log Entry 0====== 00:15:24.711 trtype: tcp 00:15:24.711 adrfam: ipv4 00:15:24.711 subtype: current discovery subsystem 00:15:24.711 treq: not required 00:15:24.711 portid: 0 00:15:24.711 trsvcid: 4420 00:15:24.711 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:24.711 traddr: 10.0.0.2 00:15:24.711 eflags: explicit discovery connections, duplicate discovery information 00:15:24.711 sectype: none 00:15:24.711 =====Discovery Log Entry 1====== 00:15:24.711 trtype: tcp 00:15:24.711 adrfam: ipv4 00:15:24.711 subtype: nvme subsystem 00:15:24.711 treq: not required 00:15:24.711 portid: 0 00:15:24.711 trsvcid: 4420 00:15:24.711 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:24.711 traddr: 10.0.0.2 00:15:24.711 eflags: none 00:15:24.711 sectype: none 00:15:24.711 =====Discovery Log Entry 2====== 00:15:24.711 
trtype: tcp 00:15:24.711 adrfam: ipv4 00:15:24.711 subtype: nvme subsystem 00:15:24.711 treq: not required 00:15:24.711 portid: 0 00:15:24.711 trsvcid: 4420 00:15:24.711 subnqn: nqn.2016-06.io.spdk:cnode2 00:15:24.711 traddr: 10.0.0.2 00:15:24.711 eflags: none 00:15:24.711 sectype: none 00:15:24.711 =====Discovery Log Entry 3====== 00:15:24.711 trtype: tcp 00:15:24.711 adrfam: ipv4 00:15:24.711 subtype: nvme subsystem 00:15:24.711 treq: not required 00:15:24.711 portid: 0 00:15:24.711 trsvcid: 4420 00:15:24.711 subnqn: nqn.2016-06.io.spdk:cnode3 00:15:24.711 traddr: 10.0.0.2 00:15:24.711 eflags: none 00:15:24.711 sectype: none 00:15:24.711 =====Discovery Log Entry 4====== 00:15:24.711 trtype: tcp 00:15:24.711 adrfam: ipv4 00:15:24.711 subtype: nvme subsystem 00:15:24.711 treq: not required 00:15:24.711 portid: 0 00:15:24.711 trsvcid: 4420 00:15:24.711 subnqn: nqn.2016-06.io.spdk:cnode4 00:15:24.711 traddr: 10.0.0.2 00:15:24.711 eflags: none 00:15:24.711 sectype: none 00:15:24.711 =====Discovery Log Entry 5====== 00:15:24.711 trtype: tcp 00:15:24.711 adrfam: ipv4 00:15:24.711 subtype: discovery subsystem referral 00:15:24.711 treq: not required 00:15:24.711 portid: 0 00:15:24.711 trsvcid: 4430 00:15:24.711 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:24.711 traddr: 10.0.0.2 00:15:24.711 eflags: none 00:15:24.711 sectype: none 00:15:24.711 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:15:24.711 Perform nvmf subsystem discovery via RPC 00:15:24.711 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:15:24.711 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.711 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:24.711 [ 00:15:24.711 { 00:15:24.711 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:15:24.711 "subtype": "Discovery", 00:15:24.711 "listen_addresses": [ 00:15:24.711 { 00:15:24.711 "trtype": "TCP", 00:15:24.711 "adrfam": "IPv4", 00:15:24.711 "traddr": "10.0.0.2", 00:15:24.711 "trsvcid": "4420" 00:15:24.711 } 00:15:24.711 ], 00:15:24.711 "allow_any_host": true, 00:15:24.711 "hosts": [] 00:15:24.711 }, 00:15:24.711 { 00:15:24.711 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:24.711 "subtype": "NVMe", 00:15:24.711 "listen_addresses": [ 00:15:24.711 { 00:15:24.711 "trtype": "TCP", 00:15:24.711 "adrfam": "IPv4", 00:15:24.711 "traddr": "10.0.0.2", 00:15:24.711 "trsvcid": "4420" 00:15:24.711 } 00:15:24.711 ], 00:15:24.711 "allow_any_host": true, 00:15:24.711 "hosts": [], 00:15:24.711 "serial_number": "SPDK00000000000001", 00:15:24.711 "model_number": "SPDK bdev Controller", 00:15:24.711 "max_namespaces": 32, 00:15:24.711 "min_cntlid": 1, 00:15:24.711 "max_cntlid": 65519, 00:15:24.711 "namespaces": [ 00:15:24.711 { 00:15:24.711 "nsid": 1, 00:15:24.711 "bdev_name": "Null1", 00:15:24.711 "name": "Null1", 00:15:24.711 "nguid": "C2590FF7D01149718E49C20C5AB8102B", 00:15:24.711 "uuid": "c2590ff7-d011-4971-8e49-c20c5ab8102b" 00:15:24.711 } 00:15:24.711 ] 00:15:24.711 }, 00:15:24.711 { 00:15:24.711 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:24.711 "subtype": "NVMe", 00:15:24.711 "listen_addresses": [ 00:15:24.711 { 00:15:24.711 "trtype": "TCP", 00:15:24.711 "adrfam": "IPv4", 00:15:24.711 "traddr": "10.0.0.2", 00:15:24.711 "trsvcid": "4420" 00:15:24.711 } 00:15:24.711 ], 00:15:24.711 "allow_any_host": true, 00:15:24.711 "hosts": [], 00:15:24.711 "serial_number": "SPDK00000000000002", 00:15:24.711 "model_number": "SPDK bdev Controller", 00:15:24.711 "max_namespaces": 32, 00:15:24.711 "min_cntlid": 1, 00:15:24.711 "max_cntlid": 65519, 00:15:24.711 "namespaces": [ 00:15:24.711 { 00:15:24.711 "nsid": 1, 00:15:24.711 "bdev_name": "Null2", 00:15:24.711 "name": "Null2", 00:15:24.711 "nguid": "4B9D6B00AA624406AD101AF131D50CFA", 
00:15:24.712 "uuid": "4b9d6b00-aa62-4406-ad10-1af131d50cfa" 00:15:24.712 } 00:15:24.712 ] 00:15:24.712 }, 00:15:24.712 { 00:15:24.712 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:15:24.712 "subtype": "NVMe", 00:15:24.712 "listen_addresses": [ 00:15:24.712 { 00:15:24.712 "trtype": "TCP", 00:15:24.712 "adrfam": "IPv4", 00:15:24.712 "traddr": "10.0.0.2", 00:15:24.712 "trsvcid": "4420" 00:15:24.712 } 00:15:24.712 ], 00:15:24.712 "allow_any_host": true, 00:15:24.712 "hosts": [], 00:15:24.712 "serial_number": "SPDK00000000000003", 00:15:24.712 "model_number": "SPDK bdev Controller", 00:15:24.712 "max_namespaces": 32, 00:15:24.712 "min_cntlid": 1, 00:15:24.712 "max_cntlid": 65519, 00:15:24.712 "namespaces": [ 00:15:24.712 { 00:15:24.712 "nsid": 1, 00:15:24.712 "bdev_name": "Null3", 00:15:24.712 "name": "Null3", 00:15:24.712 "nguid": "F026C142E0E94FDABE88B630926EFB89", 00:15:24.973 "uuid": "f026c142-e0e9-4fda-be88-b630926efb89" 00:15:24.973 } 00:15:24.973 ] 00:15:24.973 }, 00:15:24.973 { 00:15:24.973 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:15:24.973 "subtype": "NVMe", 00:15:24.973 "listen_addresses": [ 00:15:24.973 { 00:15:24.973 "trtype": "TCP", 00:15:24.973 "adrfam": "IPv4", 00:15:24.973 "traddr": "10.0.0.2", 00:15:24.973 "trsvcid": "4420" 00:15:24.973 } 00:15:24.973 ], 00:15:24.973 "allow_any_host": true, 00:15:24.973 "hosts": [], 00:15:24.973 "serial_number": "SPDK00000000000004", 00:15:24.973 "model_number": "SPDK bdev Controller", 00:15:24.973 "max_namespaces": 32, 00:15:24.973 "min_cntlid": 1, 00:15:24.973 "max_cntlid": 65519, 00:15:24.973 "namespaces": [ 00:15:24.973 { 00:15:24.973 "nsid": 1, 00:15:24.973 "bdev_name": "Null4", 00:15:24.973 "name": "Null4", 00:15:24.973 "nguid": "85EC281A2B1A43CA99FE9309D0656B1A", 00:15:24.973 "uuid": "85ec281a-2b1a-43ca-99fe-9309d0656b1a" 00:15:24.973 } 00:15:24.973 ] 00:15:24.973 } 00:15:24.973 ] 00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.973 
14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name'
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs=
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']'
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync
00:15:24.973 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:15:24.974 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e
00:15:24.974 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
00:15:24.974 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:15:24.974 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:15:24.974 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e
00:15:24.974 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0
00:15:24.974 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 2910499 ']'
00:15:24.974 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 2910499
00:15:24.974 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 2910499 ']'
00:15:24.974 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 2910499
00:15:24.974 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname
00:15:24.974 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:24.974 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2910499
00:15:25.235 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:15:25.235 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:15:25.235 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2910499'
killing process with pid 2910499
14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 2910499
00:15:25.235 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 2910499
00:15:26.177 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:15:26.177 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:15:26.177 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:15:26.177 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr
00:15:26.177 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save
00:15:26.177 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:15:26.177 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore
00:15:26.177 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:15:26.177 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery --
nvmf/common.sh@302 -- # remove_spdk_ns
00:15:26.177 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:26.177 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:15:26.177 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:28.090 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:15:28.090
00:15:28.090 real 0m12.241s
00:15:28.090 user 0m9.803s
00:15:28.090 sys 0m6.051s
00:15:28.090 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:28.090 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:28.090 ************************************
00:15:28.090 END TEST nvmf_target_discovery
00:15:28.090 ************************************
00:15:28.090 14:25:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:15:28.090 14:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:15:28.090 14:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:28.090 14:25:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:15:28.090 ************************************
00:15:28.090 START TEST nvmf_referrals
00:15:28.090 ************************************
00:15:28.090 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:15:28.351 * Looking for test storage...
00:15:28.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:28.351 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:28.351 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:15:28.351 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:28.351 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:28.351 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:28.351 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:28.351 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:15:28.352 14:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:28.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.352 
--rc genhtml_branch_coverage=1 00:15:28.352 --rc genhtml_function_coverage=1 00:15:28.352 --rc genhtml_legend=1 00:15:28.352 --rc geninfo_all_blocks=1 00:15:28.352 --rc geninfo_unexecuted_blocks=1 00:15:28.352 00:15:28.352 ' 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:28.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.352 --rc genhtml_branch_coverage=1 00:15:28.352 --rc genhtml_function_coverage=1 00:15:28.352 --rc genhtml_legend=1 00:15:28.352 --rc geninfo_all_blocks=1 00:15:28.352 --rc geninfo_unexecuted_blocks=1 00:15:28.352 00:15:28.352 ' 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:28.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.352 --rc genhtml_branch_coverage=1 00:15:28.352 --rc genhtml_function_coverage=1 00:15:28.352 --rc genhtml_legend=1 00:15:28.352 --rc geninfo_all_blocks=1 00:15:28.352 --rc geninfo_unexecuted_blocks=1 00:15:28.352 00:15:28.352 ' 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:28.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.352 --rc genhtml_branch_coverage=1 00:15:28.352 --rc genhtml_function_coverage=1 00:15:28.352 --rc genhtml_legend=1 00:15:28.352 --rc geninfo_all_blocks=1 00:15:28.352 --rc geninfo_unexecuted_blocks=1 00:15:28.352 00:15:28.352 ' 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.352 
14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.352 14:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:28.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:28.352 14:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:28.352 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.353 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:28.353 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:28.353 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:15:28.353 14:25:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:36.493 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:36.493 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:36.493 Found net devices under 0000:31:00.0: cvl_0_0 00:15:36.493 14:25:59 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:36.493 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:36.494 Found net devices under 0000:31:00.1: cvl_0_1 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:36.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:36.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.508 ms 00:15:36.494 00:15:36.494 --- 10.0.0.2 ping statistics --- 00:15:36.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.494 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:36.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:36.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:15:36.494 00:15:36.494 --- 10.0.0.1 ping statistics --- 00:15:36.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.494 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=2915261 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 2915261 00:15:36.494 
14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 2915261 ']' 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:36.494 14:25:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:36.494 [2024-10-07 14:25:59.649883] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:15:36.494 [2024-10-07 14:25:59.650019] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.494 [2024-10-07 14:25:59.792131] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:36.494 [2024-10-07 14:25:59.978452] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.494 [2024-10-07 14:25:59.978495] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:36.494 [2024-10-07 14:25:59.978507] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.494 [2024-10-07 14:25:59.978520] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.494 [2024-10-07 14:25:59.978529] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.494 [2024-10-07 14:25:59.982063] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.494 [2024-10-07 14:25:59.982315] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.494 [2024-10-07 14:25:59.982438] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.494 [2024-10-07 14:25:59.982454] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:36.754 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:36.754 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:15:36.754 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:36.754 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:36.754 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:36.754 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.754 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:36.754 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.754 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:36.754 [2024-10-07 14:26:00.461605] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.014 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.014 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:15:37.014 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.014 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:37.014 [2024-10-07 14:26:00.477856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:37.014 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.014 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:15:37.014 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.014 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:37.014 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.014 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:15:37.014 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.014 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:37.014 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.014 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:15:37.014 14:26:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.014 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:37.014 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.014 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:37.014 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:15:37.014 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.014 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:37.014 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.014 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:15:37.014 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:15:37.014 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:37.015 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:37.015 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.015 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:37.015 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:37.015 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:37.015 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.015 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:15:37.015 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:15:37.015 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:15:37.015 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:37.015 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:37.015 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:37.015 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:37.015 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:37.274 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:15:37.274 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:15:37.274 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:15:37.274 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.274 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:37.274 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.274 14:26:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:15:37.274 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.274 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:37.274 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.274 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:15:37.274 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.274 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:37.274 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.274 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:37.274 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:15:37.274 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.274 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:37.274 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.274 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:15:37.274 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:15:37.274 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:37.274 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:15:37.274 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:37.275 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:37.275 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:37.535 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:37.795 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:15:37.795 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:15:37.795 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:15:37.795 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:15:37.795 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:15:37.795 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:37.795 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:15:38.055 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:38.055 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:15:38.055 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:15:38.055 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:15:38.055 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:38.055 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:15:38.055 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:15:38.055 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:15:38.056 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.056 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:38.056 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.056 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:15:38.056 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:38.315 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:38.315 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:38.315 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.315 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:15:38.315 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:38.315 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.315 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:15:38.315 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:15:38.315 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:15:38.315 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:38.315 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:38.315 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:38.315 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:38.315 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:38.315 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:15:38.315 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:15:38.315 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:15:38.315 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:15:38.315 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:15:38.315 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:38.315 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:15:38.575 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:15:38.575 14:26:02 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:15:38.575 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:15:38.575 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:15:38.575 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:38.575 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:38.835 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:38.835 rmmod nvme_tcp 00:15:39.095 rmmod nvme_fabrics 00:15:39.095 rmmod nvme_keyring 00:15:39.095 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:39.095 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:15:39.095 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:15:39.095 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 2915261 ']' 00:15:39.095 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 2915261 00:15:39.095 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 2915261 ']' 00:15:39.095 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 2915261 00:15:39.095 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:15:39.095 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:39.095 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2915261 00:15:39.095 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:39.095 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:39.095 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2915261' 00:15:39.095 killing process with pid 2915261 00:15:39.095 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@969 -- # kill 2915261 00:15:39.095 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 2915261 00:15:40.035 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:40.035 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:40.035 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:40.035 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:15:40.035 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:15:40.035 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:40.035 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:15:40.035 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:40.035 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:40.035 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.035 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:40.035 14:26:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.945 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:41.945 00:15:41.945 real 0m13.871s 00:15:41.945 user 0m16.262s 00:15:41.945 sys 0m6.630s 00:15:41.945 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:41.945 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:15:41.945 
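The teardown traced above stops the target via the `killprocess` helper in `common/autotest_common.sh`: a `kill -0` liveness check, a command-name lookup with `ps`, then `kill` followed by `wait`. A minimal, self-contained sketch of that pattern, assuming only POSIX `ps` and `kill` (the function name and polling interval are illustrative, not the helper's real signature):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern seen in the trace above: confirm the PID
# is alive, log its command name, kill it, then wait until it is gone.
# Function name and 0.1 s polling interval are illustrative.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1               # not running
    local name
    name=$(ps -p "$pid" -o comm= 2>/dev/null || echo '?')
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    # 'wait' only reaps our own children; polling works for any PID.
    while kill -0 "$pid" 2>/dev/null; do sleep 0.1; done
}
```

Invoked as `killprocess_sketch "$nvmfpid"`. The real helper also refuses to kill processes whose command name is `sudo`, which is what the `'[' reactor_0 = sudo ']'` check in the trace is guarding against.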
************************************ 00:15:41.945 END TEST nvmf_referrals 00:15:41.945 ************************************ 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:42.206 ************************************ 00:15:42.206 START TEST nvmf_connect_disconnect 00:15:42.206 ************************************ 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:42.206 * Looking for test storage... 
00:15:42.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:42.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.206 --rc genhtml_branch_coverage=1 00:15:42.206 --rc genhtml_function_coverage=1 00:15:42.206 --rc genhtml_legend=1 00:15:42.206 --rc geninfo_all_blocks=1 00:15:42.206 --rc geninfo_unexecuted_blocks=1 00:15:42.206 00:15:42.206 ' 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:42.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.206 --rc genhtml_branch_coverage=1 00:15:42.206 --rc genhtml_function_coverage=1 00:15:42.206 --rc genhtml_legend=1 00:15:42.206 --rc geninfo_all_blocks=1 00:15:42.206 --rc geninfo_unexecuted_blocks=1 00:15:42.206 00:15:42.206 ' 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:42.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.206 --rc genhtml_branch_coverage=1 00:15:42.206 --rc genhtml_function_coverage=1 00:15:42.206 --rc genhtml_legend=1 00:15:42.206 --rc geninfo_all_blocks=1 00:15:42.206 --rc geninfo_unexecuted_blocks=1 00:15:42.206 00:15:42.206 ' 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:42.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.206 --rc genhtml_branch_coverage=1 00:15:42.206 --rc genhtml_function_coverage=1 00:15:42.206 --rc genhtml_legend=1 00:15:42.206 --rc geninfo_all_blocks=1 00:15:42.206 --rc geninfo_unexecuted_blocks=1 00:15:42.206 00:15:42.206 ' 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.206 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:42.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:15:42.467 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:50.604 14:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:50.604 14:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:50.604 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:50.604 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:50.604 14:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:50.604 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:50.605 Found net devices under 0000:31:00.0: cvl_0_0 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:50.605 14:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:50.605 Found net devices under 0000:31:00.1: cvl_0_1 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:50.605 14:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:50.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:50.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:15:50.605 00:15:50.605 --- 10.0.0.2 ping statistics --- 00:15:50.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.605 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:50.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:50.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:15:50.605 00:15:50.605 --- 10.0.0.1 ping statistics --- 00:15:50.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.605 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # 
nvmfpid=2920423 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 2920423 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 2920423 ']' 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:50.605 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:50.605 [2024-10-07 14:26:13.475421] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:15:50.605 [2024-10-07 14:26:13.475550] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.605 [2024-10-07 14:26:13.614880] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:50.605 [2024-10-07 14:26:13.798860] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:50.605 [2024-10-07 14:26:13.798909] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.605 [2024-10-07 14:26:13.798921] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.605 [2024-10-07 14:26:13.798934] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.605 [2024-10-07 14:26:13.798943] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:50.605 [2024-10-07 14:26:13.801149] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.605 [2024-10-07 14:26:13.801336] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:50.605 [2024-10-07 14:26:13.801458] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.605 [2024-10-07 14:26:13.801473] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:50.605 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:50.605 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:15:50.605 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:50.605 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:50.605 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:50.605 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.605 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:50.605 14:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.605 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:50.605 [2024-10-07 14:26:14.295500] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:50.605 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.605 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:15:50.605 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.605 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:50.866 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.866 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:15:50.866 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:50.866 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.866 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:50.866 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.866 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:50.866 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.866 14:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:50.866 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.866 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:50.866 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.866 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:50.866 [2024-10-07 14:26:14.393832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.866 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.866 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:15:50.866 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:15:50.866 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:15:50.866 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:15:53.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.952 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:54.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:01.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:03.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:06.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:08.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:13.239 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:22.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:25.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:29.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:32.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:34.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:36.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:39.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:41.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:44.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:46.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:48.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:50.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:53.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:55.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:58.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:00.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:02.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:05.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:08.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:10.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:12.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:15.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:17.037 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:19.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:22.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:24.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:26.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:29.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:31.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:34.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:36.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:38.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:41.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:43.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:45.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:48.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:50.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:52.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:55.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:57.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:59.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:02.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:04.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:07.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:09.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:11.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:14.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:16.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:19.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:21.311 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:23.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:26.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:28.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:30.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:33.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:35.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:37.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:40.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:42.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:45.097 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:47.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:47.641 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:19:47.641 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:19:47.641 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:47.641 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:19:47.641 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:47.641 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:19:47.641 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:47.641 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:47.641 rmmod nvme_tcp 00:19:47.641 rmmod nvme_fabrics 00:19:47.641 rmmod nvme_keyring 00:19:47.641 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:19:47.641 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:19:47.641 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:19:47.641 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 2920423 ']' 00:19:47.641 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 2920423 00:19:47.641 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2920423 ']' 00:19:47.641 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 2920423 00:19:47.641 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:19:47.641 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:47.641 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2920423 00:19:47.641 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:47.641 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:47.641 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2920423' 00:19:47.641 killing process with pid 2920423 00:19:47.641 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 2920423 00:19:47.641 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 2920423 00:19:48.214 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:48.214 14:30:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:48.214 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:48.214 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:19:48.214 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:19:48.214 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:48.214 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:19:48.214 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:48.214 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:48.214 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.214 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:48.214 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.759 14:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:50.759 00:19:50.759 real 4m8.235s 00:19:50.759 user 15m39.093s 00:19:50.759 sys 0m30.086s 00:19:50.759 14:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:50.759 14:30:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:19:50.759 ************************************ 00:19:50.759 END TEST nvmf_connect_disconnect 00:19:50.759 ************************************ 00:19:50.759 14:30:13 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:19:50.759 14:30:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:50.759 14:30:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:50.759 14:30:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:50.759 ************************************ 00:19:50.759 START TEST nvmf_multitarget 00:19:50.759 ************************************ 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:19:50.759 * Looking for test storage... 00:19:50.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 
-- # read -ra ver1 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:50.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.759 --rc genhtml_branch_coverage=1 00:19:50.759 --rc genhtml_function_coverage=1 00:19:50.759 --rc genhtml_legend=1 00:19:50.759 --rc geninfo_all_blocks=1 00:19:50.759 --rc 
geninfo_unexecuted_blocks=1 00:19:50.759 00:19:50.759 ' 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:50.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.759 --rc genhtml_branch_coverage=1 00:19:50.759 --rc genhtml_function_coverage=1 00:19:50.759 --rc genhtml_legend=1 00:19:50.759 --rc geninfo_all_blocks=1 00:19:50.759 --rc geninfo_unexecuted_blocks=1 00:19:50.759 00:19:50.759 ' 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:50.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.759 --rc genhtml_branch_coverage=1 00:19:50.759 --rc genhtml_function_coverage=1 00:19:50.759 --rc genhtml_legend=1 00:19:50.759 --rc geninfo_all_blocks=1 00:19:50.759 --rc geninfo_unexecuted_blocks=1 00:19:50.759 00:19:50.759 ' 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:50.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.759 --rc genhtml_branch_coverage=1 00:19:50.759 --rc genhtml_function_coverage=1 00:19:50.759 --rc genhtml_legend=1 00:19:50.759 --rc geninfo_all_blocks=1 00:19:50.759 --rc geninfo_unexecuted_blocks=1 00:19:50.759 00:19:50.759 ' 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.759 14:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:50.759 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.760 14:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:50.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:19:50.760 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@322 -- # local -ga mlx 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # 
[[ e810 == mlx5 ]] 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:58.901 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:58.901 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:58.901 Found net devices under 0000:31:00.0: cvl_0_0 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- 
# [[ tcp == tcp ]] 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:58.901 Found net devices under 0000:31:00.1: cvl_0_1 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:58.901 14:30:21 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.901 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:58.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:58.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:19:58.902 00:19:58.902 --- 10.0.0.2 ping statistics --- 00:19:58.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.902 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:58.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:58.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:19:58.902 00:19:58.902 --- 10.0.0.1 ping statistics --- 00:19:58.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.902 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=2973008 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 2973008 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 2973008 ']' 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:58.902 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:58.902 [2024-10-07 14:30:21.871977] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:19:58.902 [2024-10-07 14:30:21.872115] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.902 [2024-10-07 14:30:22.012141] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:58.902 [2024-10-07 14:30:22.196343] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.902 [2024-10-07 14:30:22.196401] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.902 [2024-10-07 14:30:22.196413] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.902 [2024-10-07 14:30:22.196426] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.902 [2024-10-07 14:30:22.196435] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:58.902 [2024-10-07 14:30:22.198695] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.902 [2024-10-07 14:30:22.198776] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.902 [2024-10-07 14:30:22.198895] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.902 [2024-10-07 14:30:22.198916] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:19:59.163 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:59.163 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:19:59.163 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:59.163 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:59.163 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:19:59.163 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.163 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:59.163 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:19:59.163 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:19:59.163 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:19:59.163 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:19:59.423 "nvmf_tgt_1" 00:19:59.423 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:19:59.423 "nvmf_tgt_2" 00:19:59.423 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:19:59.423 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:19:59.423 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:19:59.423 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:19:59.684 true 00:19:59.684 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:19:59.684 true 00:19:59.684 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:19:59.684 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:19:59.944 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:19:59.944 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:19:59.944 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:19:59.944 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:59.944 14:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:19:59.945 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:59.945 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:19:59.945 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:59.945 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:59.945 rmmod nvme_tcp 00:19:59.945 rmmod nvme_fabrics 00:19:59.945 rmmod nvme_keyring 00:19:59.945 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:59.945 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:19:59.945 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:19:59.945 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 2973008 ']' 00:19:59.945 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 2973008 00:19:59.945 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 2973008 ']' 00:19:59.945 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 2973008 00:19:59.945 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:19:59.945 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:59.945 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2973008 00:19:59.945 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:59.945 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- 
# '[' reactor_0 = sudo ']' 00:19:59.945 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2973008' 00:19:59.945 killing process with pid 2973008 00:19:59.945 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 2973008 00:19:59.945 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 2973008 00:20:00.885 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:00.885 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:00.885 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:00.885 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:20:00.885 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:20:00.885 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:00.885 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:20:00.885 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:00.885 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:00.885 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.885 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.885 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:03.431 
00:20:03.431 real 0m12.493s 00:20:03.431 user 0m11.183s 00:20:03.431 sys 0m6.256s 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:20:03.431 ************************************ 00:20:03.431 END TEST nvmf_multitarget 00:20:03.431 ************************************ 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:03.431 ************************************ 00:20:03.431 START TEST nvmf_rpc 00:20:03.431 ************************************ 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:20:03.431 * Looking for test storage... 
00:20:03.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:03.431 14:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:20:03.431 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:03.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.432 --rc genhtml_branch_coverage=1 00:20:03.432 --rc genhtml_function_coverage=1 00:20:03.432 --rc genhtml_legend=1 00:20:03.432 --rc geninfo_all_blocks=1 00:20:03.432 --rc geninfo_unexecuted_blocks=1 
00:20:03.432 00:20:03.432 ' 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:03.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.432 --rc genhtml_branch_coverage=1 00:20:03.432 --rc genhtml_function_coverage=1 00:20:03.432 --rc genhtml_legend=1 00:20:03.432 --rc geninfo_all_blocks=1 00:20:03.432 --rc geninfo_unexecuted_blocks=1 00:20:03.432 00:20:03.432 ' 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:03.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.432 --rc genhtml_branch_coverage=1 00:20:03.432 --rc genhtml_function_coverage=1 00:20:03.432 --rc genhtml_legend=1 00:20:03.432 --rc geninfo_all_blocks=1 00:20:03.432 --rc geninfo_unexecuted_blocks=1 00:20:03.432 00:20:03.432 ' 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:03.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.432 --rc genhtml_branch_coverage=1 00:20:03.432 --rc genhtml_function_coverage=1 00:20:03.432 --rc genhtml_legend=1 00:20:03.432 --rc geninfo_all_blocks=1 00:20:03.432 --rc geninfo_unexecuted_blocks=1 00:20:03.432 00:20:03.432 ' 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:03.432 14:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:03.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:03.432 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:20:03.432 14:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:11.577 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:11.577 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:20:11.577 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:11.577 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:11.578 
14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 
(0x8086 - 0x159b)' 00:20:11.578 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:11.578 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:11.578 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:11.578 Found net devices under 0000:31:00.0: cvl_0_0 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:11.578 Found net devices under 0000:31:00.1: cvl_0_1 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.578 14:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:11.578 
14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:11.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:11.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:20:11.578 00:20:11.578 --- 10.0.0.2 ping statistics --- 00:20:11.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.578 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:11.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:11.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:20:11.578 00:20:11.578 --- 10.0.0.1 ping statistics --- 00:20:11.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.578 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:11.578 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.579 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:11.579 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:11.579 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:20:11.579 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:11.579 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:11.579 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:11.579 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=2977772 00:20:11.579 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 2977772 00:20:11.579 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:11.579 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 2977772 ']' 00:20:11.579 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.579 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:11.579 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.579 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:11.579 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:11.579 [2024-10-07 14:30:34.464390] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:20:11.579 [2024-10-07 14:30:34.464520] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.579 [2024-10-07 14:30:34.620331] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:11.579 [2024-10-07 14:30:34.804007] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.579 [2024-10-07 14:30:34.804059] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:11.579 [2024-10-07 14:30:34.804071] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.579 [2024-10-07 14:30:34.804083] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.579 [2024-10-07 14:30:34.804092] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:11.579 [2024-10-07 14:30:34.806794] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.579 [2024-10-07 14:30:34.806877] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:11.579 [2024-10-07 14:30:34.806993] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.579 [2024-10-07 14:30:34.807034] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:20:11.579 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:11.579 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:20:11.579 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:11.579 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:11.579 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:11.579 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.579 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:20:11.579 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.579 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.840 14:30:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:20:11.840 "tick_rate": 2400000000, 00:20:11.840 "poll_groups": [ 00:20:11.840 { 00:20:11.840 "name": "nvmf_tgt_poll_group_000", 00:20:11.840 "admin_qpairs": 0, 00:20:11.840 "io_qpairs": 0, 00:20:11.840 "current_admin_qpairs": 0, 00:20:11.840 "current_io_qpairs": 0, 00:20:11.840 "pending_bdev_io": 0, 00:20:11.840 "completed_nvme_io": 0, 00:20:11.840 "transports": [] 00:20:11.840 }, 00:20:11.840 { 00:20:11.840 "name": "nvmf_tgt_poll_group_001", 00:20:11.840 "admin_qpairs": 0, 00:20:11.840 "io_qpairs": 0, 00:20:11.840 "current_admin_qpairs": 0, 00:20:11.840 "current_io_qpairs": 0, 00:20:11.840 "pending_bdev_io": 0, 00:20:11.840 "completed_nvme_io": 0, 00:20:11.840 "transports": [] 00:20:11.840 }, 00:20:11.840 { 00:20:11.840 "name": "nvmf_tgt_poll_group_002", 00:20:11.840 "admin_qpairs": 0, 00:20:11.840 "io_qpairs": 0, 00:20:11.840 "current_admin_qpairs": 0, 00:20:11.840 "current_io_qpairs": 0, 00:20:11.840 "pending_bdev_io": 0, 00:20:11.840 "completed_nvme_io": 0, 00:20:11.840 "transports": [] 00:20:11.840 }, 00:20:11.840 { 00:20:11.840 "name": "nvmf_tgt_poll_group_003", 00:20:11.840 "admin_qpairs": 0, 00:20:11.840 "io_qpairs": 0, 00:20:11.840 "current_admin_qpairs": 0, 00:20:11.840 "current_io_qpairs": 0, 00:20:11.840 "pending_bdev_io": 0, 00:20:11.840 "completed_nvme_io": 0, 00:20:11.840 "transports": [] 00:20:11.840 } 00:20:11.840 ] 00:20:11.840 }' 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:20:11.840 14:30:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:11.840 [2024-10-07 14:30:35.394904] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:20:11.840 "tick_rate": 2400000000, 00:20:11.840 "poll_groups": [ 00:20:11.840 { 00:20:11.840 "name": "nvmf_tgt_poll_group_000", 00:20:11.840 "admin_qpairs": 0, 00:20:11.840 "io_qpairs": 0, 00:20:11.840 "current_admin_qpairs": 0, 00:20:11.840 "current_io_qpairs": 0, 00:20:11.840 "pending_bdev_io": 0, 00:20:11.840 "completed_nvme_io": 0, 00:20:11.840 "transports": [ 00:20:11.840 { 00:20:11.840 "trtype": "TCP" 00:20:11.840 } 00:20:11.840 ] 00:20:11.840 }, 00:20:11.840 { 00:20:11.840 "name": "nvmf_tgt_poll_group_001", 00:20:11.840 "admin_qpairs": 0, 00:20:11.840 "io_qpairs": 0, 00:20:11.840 "current_admin_qpairs": 0, 00:20:11.840 "current_io_qpairs": 0, 00:20:11.840 "pending_bdev_io": 0, 00:20:11.840 
"completed_nvme_io": 0, 00:20:11.840 "transports": [ 00:20:11.840 { 00:20:11.840 "trtype": "TCP" 00:20:11.840 } 00:20:11.840 ] 00:20:11.840 }, 00:20:11.840 { 00:20:11.840 "name": "nvmf_tgt_poll_group_002", 00:20:11.840 "admin_qpairs": 0, 00:20:11.840 "io_qpairs": 0, 00:20:11.840 "current_admin_qpairs": 0, 00:20:11.840 "current_io_qpairs": 0, 00:20:11.840 "pending_bdev_io": 0, 00:20:11.840 "completed_nvme_io": 0, 00:20:11.840 "transports": [ 00:20:11.840 { 00:20:11.840 "trtype": "TCP" 00:20:11.840 } 00:20:11.840 ] 00:20:11.840 }, 00:20:11.840 { 00:20:11.840 "name": "nvmf_tgt_poll_group_003", 00:20:11.840 "admin_qpairs": 0, 00:20:11.840 "io_qpairs": 0, 00:20:11.840 "current_admin_qpairs": 0, 00:20:11.840 "current_io_qpairs": 0, 00:20:11.840 "pending_bdev_io": 0, 00:20:11.840 "completed_nvme_io": 0, 00:20:11.840 "transports": [ 00:20:11.840 { 00:20:11.840 "trtype": "TCP" 00:20:11.840 } 00:20:11.840 ] 00:20:11.840 } 00:20:11.840 ] 00:20:11.840 }' 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:20:11.840 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:20:11.841 
14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:20:11.841 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:20:11.841 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:20:11.841 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:20:11.841 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:11.841 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.841 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:12.101 Malloc1 00:20:12.101 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.101 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:12.101 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.101 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:12.101 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.101 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:12.101 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.101 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:12.101 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.101 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:20:12.101 14:30:35 
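The `jsum` calls above (target/rpc.sh@19-20) sum a jq filter over every poll group in the `nvmf_get_stats` JSON. A minimal re-creation of that helper, using a trimmed stand-in for the stats output (the field values here are made up for illustration; the real log shows all zeros):

```shell
#!/usr/bin/env bash
# Sketch of the jsum helper seen in target/rpc.sh: apply a jq filter to
# stats JSON, then total the resulting values with awk. Requires jq.
stats='{"poll_groups":[{"admin_qpairs":0,"io_qpairs":2},{"admin_qpairs":1,"io_qpairs":3}]}'

jsum() {
    local filter=$1
    # jq emits one number per poll group; awk accumulates and prints the sum
    echo "$stats" | jq "$filter" | awk '{s+=$1} END {print s}'
}

jsum '.poll_groups[].admin_qpairs'   # sums to 1 for the sample data
jsum '.poll_groups[].io_qpairs'      # sums to 5 for the sample data
```

The test then asserts the totals, e.g. `(( $(jsum '.poll_groups[].io_qpairs') == 0 ))` on a freshly created transport, which is the `(( 0 == 0 ))` check in the log.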
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.101 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:12.101 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.101 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:12.101 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.101 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:12.101 [2024-10-07 14:30:35.621836] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.101 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.101 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:20:12.101 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:20:12.101 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:20:12.101 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:20:12.101 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:20:12.101 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:20:12.101 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:12.101 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:20:12.102 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:12.102 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:20:12.102 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:20:12.102 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:20:12.102 [2024-10-07 14:30:35.659423] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:20:12.102 Failed to write to /dev/nvme-fabrics: Input/output error 00:20:12.102 could not add new controller: failed to write to nvme-fabrics device 00:20:12.102 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:20:12.102 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:12.102 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:12.102 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:12.102 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:12.102 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.102 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:12.102 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.102 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:14.013 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:20:14.013 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:20:14.013 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:14.013 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:14.013 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 
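The `NOT nvme connect ...` invocation above expects the connect to fail (the subsystem does not yet allow the host NQN), and the surrounding `es=` bookkeeping converts that expected failure into a test pass. A simplified sketch of that wrapper pattern, assuming only the inversion logic (the real `NOT` in common/autotest_common.sh also validates the argument with `type -t`/`type -P` and tracks exit codes more carefully):

```shell
#!/usr/bin/env bash
# Hedged sketch of the NOT expected-failure wrapper: run a command that
# SHOULD fail, and succeed only if it actually did.
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded -> test failure
    fi
    return 0       # command failed as expected -> test success
}

NOT false && echo "false failed as expected"
NOT grep -q pattern /nonexistent-file && echo "grep failed as expected"
```

In the log this is what turns the `could not add new controller: failed to write to nvme-fabrics device` error into a passing step, right before `nvmf_subsystem_add_host` grants access and the same `nvme connect` succeeds.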
00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:15.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:20:15.927 14:30:39 
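The `waitforserial` loop above (common/autotest_common.sh@1198-1208) polls `lsblk -l -o NAME,SERIAL | grep -c <serial>` until the connected namespace shows up as a block device. A self-contained sketch of that polling pattern, with the `lsblk` probe simulated by a file so it runs anywhere (the file, the helper body, and the timings are illustrative assumptions, not the harness's exact code):

```shell
#!/usr/bin/env bash
# Sketch of the waitforserial pattern: retry a device probe up to 16 times
# until the expected number of devices with the given serial appears.
probe_file=$(mktemp)
echo "nvme0n1 SPDKISFASTANDAWESOME" > "$probe_file"

waitforserial() {
    local serial=$1 want=${2:-1} i=0 found
    while (( i++ <= 15 )); do
        # stand-in for: lsblk -l -o NAME,SERIAL | grep -c -w "$serial"
        found=$(grep -c -w "$serial" "$probe_file" 2>/dev/null || true)
        (( found == want )) && return 0
        sleep 0.1
    done
    return 1
}

waitforserial SPDKISFASTANDAWESOME && echo "device present"
```

The companion `waitforserial_disconnect` seen after `nvme disconnect` is the inverse: it polls with `grep -q -w` until the serial disappears from the `lsblk` listing.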
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:15.927 [2024-10-07 14:30:39.579826] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:20:15.927 Failed to write to /dev/nvme-fabrics: Input/output error 00:20:15.927 could not add new controller: failed to write to nvme-fabrics device 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:20:15.927 
14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.927 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:17.840 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:20:17.840 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:20:17.840 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:17.840 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:17.840 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:20:19.750 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:19.750 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:19.750 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:20:19.750 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:19.750 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:19.750 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:20:19.750 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:19.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:19.750 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:19.750 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:20:19.750 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:19.750 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:19.750 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:19.750 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:19.750 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:20:19.750 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:19.750 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.750 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:19.750 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.750 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:20:19.750 14:30:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:20.012 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:20.012 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.012 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:20.012 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.012 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:20.012 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.012 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:20.012 [2024-10-07 14:30:43.479692] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.012 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.012 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:20.012 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.012 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:20.012 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.012 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:20.012 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.012 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:20:20.012 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.012 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:21.395 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:21.395 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:20:21.395 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:21.395 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:21.395 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:20:23.937 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:23.937 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:23.937 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:23.937 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:23.937 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:23.937 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:20:23.937 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:23.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:23.938 
14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.938 [2024-10-07 14:30:47.382243] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.938 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:25.318 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:25.318 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:20:25.318 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:25.318 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:25.318 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:20:27.859 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:27.859 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:27.859 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:27.859 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:27.859 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:27.859 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:20:27.859 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:27.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:27.859 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:27.859 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:20:27.859 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:27.859 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:27.859 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:27.859 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:27.859 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:20:27.859 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:27.859 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.859 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:27.859 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.859 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:27.859 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.859 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:27.859 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.859 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:27.859 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:27.860 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.860 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:27.860 14:30:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.860 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:27.860 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.860 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:27.860 [2024-10-07 14:30:51.296763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.860 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.860 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:27.860 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.860 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:27.860 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.860 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:27.860 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.860 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:27.860 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.860 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:29.243 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:29.243 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:20:29.243 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:29.243 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:29.243 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:20:31.786 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:31.786 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:31.786 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:31.786 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:31.786 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:31.786 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:20:31.786 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:31.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:31.786 [2024-10-07 14:30:55.205459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.786 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:33.172 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:33.172 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:20:33.172 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 
-- # local nvme_device_counter=1 nvme_devices=0 00:20:33.172 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:33.172 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:20:35.086 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:35.086 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:35.086 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:35.086 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:35.086 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:35.086 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:20:35.086 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:35.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:35.347 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:35.347 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:20:35.347 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:35.347 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:35.347 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:35.347 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:35.347 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # return 0 00:20:35.347 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:35.347 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.347 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:35.347 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.347 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:35.347 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.347 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:35.608 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.608 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:35.608 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:35.608 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.608 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:35.608 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.608 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:35.608 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.609 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:35.609 [2024-10-07 14:30:59.083039] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.609 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.609 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:35.609 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.609 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:35.609 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.609 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:35.609 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.609 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:35.609 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.609 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:36.997 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:36.997 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:20:36.997 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:36.997 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:36.997 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # 
sleep 2 00:20:39.543 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:39.543 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:39.543 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:39.543 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:39.543 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:39.543 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:20:39.543 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:39.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:39.543 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:39.543 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:20:39.543 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:39.543 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:39.543 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:39.543 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:39.543 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:20:39.543 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.543 [2024-10-07 14:31:03.050908] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.543 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.544 [2024-10-07 14:31:03.119116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.544 
14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:20:39.544 [2024-10-07 14:31:03.187258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.544 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.805 [2024-10-07 14:31:03.255474] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.805 [2024-10-07 14:31:03.323724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.805 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:20:39.805 "tick_rate": 2400000000, 00:20:39.805 "poll_groups": [ 00:20:39.805 { 00:20:39.805 "name": "nvmf_tgt_poll_group_000", 00:20:39.805 "admin_qpairs": 0, 00:20:39.805 "io_qpairs": 224, 00:20:39.805 "current_admin_qpairs": 0, 00:20:39.805 "current_io_qpairs": 0, 00:20:39.805 "pending_bdev_io": 0, 00:20:39.805 "completed_nvme_io": 225, 00:20:39.805 "transports": [ 00:20:39.805 { 00:20:39.805 "trtype": "TCP" 00:20:39.805 } 00:20:39.805 ] 00:20:39.805 }, 00:20:39.805 { 00:20:39.805 "name": "nvmf_tgt_poll_group_001", 00:20:39.805 "admin_qpairs": 1, 00:20:39.805 "io_qpairs": 223, 00:20:39.805 "current_admin_qpairs": 0, 00:20:39.805 "current_io_qpairs": 0, 00:20:39.805 "pending_bdev_io": 0, 00:20:39.805 "completed_nvme_io": 224, 00:20:39.805 "transports": [ 00:20:39.805 { 00:20:39.805 "trtype": "TCP" 00:20:39.805 } 00:20:39.805 ] 00:20:39.805 }, 00:20:39.805 { 00:20:39.805 "name": "nvmf_tgt_poll_group_002", 00:20:39.805 "admin_qpairs": 6, 00:20:39.805 "io_qpairs": 218, 00:20:39.805 "current_admin_qpairs": 0, 00:20:39.805 "current_io_qpairs": 0, 00:20:39.805 "pending_bdev_io": 0, 
00:20:39.805 "completed_nvme_io": 269, 00:20:39.805 "transports": [ 00:20:39.805 { 00:20:39.805 "trtype": "TCP" 00:20:39.805 } 00:20:39.805 ] 00:20:39.805 }, 00:20:39.805 { 00:20:39.805 "name": "nvmf_tgt_poll_group_003", 00:20:39.806 "admin_qpairs": 0, 00:20:39.806 "io_qpairs": 224, 00:20:39.806 "current_admin_qpairs": 0, 00:20:39.806 "current_io_qpairs": 0, 00:20:39.806 "pending_bdev_io": 0, 00:20:39.806 "completed_nvme_io": 521, 00:20:39.806 "transports": [ 00:20:39.806 { 00:20:39.806 "trtype": "TCP" 00:20:39.806 } 00:20:39.806 ] 00:20:39.806 } 00:20:39.806 ] 00:20:39.806 }' 00:20:39.806 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:20:39.806 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:20:39.806 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:20:39.806 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:20:39.806 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:20:39.806 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:20:39.806 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:20:39.806 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:20:39.806 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:20:39.806 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:20:39.806 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:20:39.806 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:20:39.806 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:20:39.806 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:39.806 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:20:39.806 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:39.806 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:20:39.806 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:39.806 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:39.806 rmmod nvme_tcp 00:20:40.066 rmmod nvme_fabrics 00:20:40.066 rmmod nvme_keyring 00:20:40.066 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:40.066 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:20:40.066 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:20:40.066 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 2977772 ']' 00:20:40.066 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 2977772 00:20:40.066 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 2977772 ']' 00:20:40.066 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 2977772 00:20:40.066 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:20:40.066 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:40.066 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2977772 00:20:40.066 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:40.066 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:40.066 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2977772' 00:20:40.066 killing process with pid 2977772 00:20:40.066 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 2977772 00:20:40.066 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 2977772 00:20:41.009 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:41.009 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:41.009 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:41.009 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:20:41.009 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:20:41.009 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:41.009 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:20:41.009 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:41.009 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:41.009 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.009 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.009 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.555 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:43.555 00:20:43.555 real 0m40.090s 00:20:43.555 user 1m59.860s 00:20:43.555 sys 0m8.254s 00:20:43.555 14:31:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:43.555 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:43.555 ************************************ 00:20:43.555 END TEST nvmf_rpc 00:20:43.555 ************************************ 00:20:43.555 14:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:20:43.555 14:31:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:43.555 14:31:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:43.555 14:31:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:43.555 ************************************ 00:20:43.555 START TEST nvmf_invalid 00:20:43.555 ************************************ 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:20:43.556 * Looking for test storage... 
00:20:43.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:43.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.556 --rc genhtml_branch_coverage=1 00:20:43.556 --rc 
genhtml_function_coverage=1 00:20:43.556 --rc genhtml_legend=1 00:20:43.556 --rc geninfo_all_blocks=1 00:20:43.556 --rc geninfo_unexecuted_blocks=1 00:20:43.556 00:20:43.556 ' 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:43.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.556 --rc genhtml_branch_coverage=1 00:20:43.556 --rc genhtml_function_coverage=1 00:20:43.556 --rc genhtml_legend=1 00:20:43.556 --rc geninfo_all_blocks=1 00:20:43.556 --rc geninfo_unexecuted_blocks=1 00:20:43.556 00:20:43.556 ' 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:43.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.556 --rc genhtml_branch_coverage=1 00:20:43.556 --rc genhtml_function_coverage=1 00:20:43.556 --rc genhtml_legend=1 00:20:43.556 --rc geninfo_all_blocks=1 00:20:43.556 --rc geninfo_unexecuted_blocks=1 00:20:43.556 00:20:43.556 ' 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:43.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.556 --rc genhtml_branch_coverage=1 00:20:43.556 --rc genhtml_function_coverage=1 00:20:43.556 --rc genhtml_legend=1 00:20:43.556 --rc geninfo_all_blocks=1 00:20:43.556 --rc geninfo_unexecuted_blocks=1 00:20:43.556 00:20:43.556 ' 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:43.556 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:20:43.556 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:20:43.556 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.556 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.556 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.556 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.556 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.556 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:20:43.556 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.556 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:20:43.556 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:43.556 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:43.556 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:43.556 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.556 14:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.556 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:43.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:43.556 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:43.556 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:43.556 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:43.556 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:20:43.556 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:43.556 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:20:43.556 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:20:43.557 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:20:43.557 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:20:43.557 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:43.557 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.557 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:43.557 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:43.557 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:43.557 14:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.557 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:43.557 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.557 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:43.557 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:43.557 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:20:43.557 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:20:51.702 14:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:51.702 14:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:51.702 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:51.702 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:51.702 Found net devices under 0000:31:00.0: cvl_0_0 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:51.702 Found net devices under 0000:31:00.1: cvl_0_1 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:51.702 14:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:51.702 14:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:51.702 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:51.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:51.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:20:51.703 00:20:51.703 --- 10.0.0.2 ping statistics --- 00:20:51.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.703 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:20:51.703 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:51.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:51.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:20:51.703 00:20:51.703 --- 10.0.0.1 ping statistics --- 00:20:51.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.703 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:20:51.703 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:51.703 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:20:51.703 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:51.703 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:51.703 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:51.703 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:51.703 14:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:51.703 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:51.703 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:51.703 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:20:51.703 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:51.703 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:51.703 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:51.703 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=2988020 00:20:51.703 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 2988020 00:20:51.703 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:51.703 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 2988020 ']' 00:20:51.703 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.703 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:51.703 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:51.703 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:51.703 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:51.703 [2024-10-07 14:31:14.558088] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:20:51.703 [2024-10-07 14:31:14.558221] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.703 [2024-10-07 14:31:14.698472] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:51.703 [2024-10-07 14:31:14.886436] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.703 [2024-10-07 14:31:14.886480] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.703 [2024-10-07 14:31:14.886492] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.703 [2024-10-07 14:31:14.886505] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.703 [2024-10-07 14:31:14.886514] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:51.703 [2024-10-07 14:31:14.888770] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.703 [2024-10-07 14:31:14.888853] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.703 [2024-10-07 14:31:14.888986] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.703 [2024-10-07 14:31:14.889024] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:20:51.703 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:51.703 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:20:51.703 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:51.703 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:51.703 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:51.703 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.703 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:51.703 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15216 00:20:51.964 [2024-10-07 14:31:15.504722] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:20:51.964 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:20:51.964 { 00:20:51.964 "nqn": "nqn.2016-06.io.spdk:cnode15216", 00:20:51.964 "tgt_name": "foobar", 00:20:51.964 "method": "nvmf_create_subsystem", 00:20:51.964 "req_id": 1 00:20:51.964 } 00:20:51.964 Got JSON-RPC error 
response 00:20:51.964 response: 00:20:51.964 { 00:20:51.964 "code": -32603, 00:20:51.964 "message": "Unable to find target foobar" 00:20:51.964 }' 00:20:51.964 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:20:51.964 { 00:20:51.964 "nqn": "nqn.2016-06.io.spdk:cnode15216", 00:20:51.964 "tgt_name": "foobar", 00:20:51.964 "method": "nvmf_create_subsystem", 00:20:51.964 "req_id": 1 00:20:51.964 } 00:20:51.964 Got JSON-RPC error response 00:20:51.964 response: 00:20:51.964 { 00:20:51.964 "code": -32603, 00:20:51.964 "message": "Unable to find target foobar" 00:20:51.964 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:20:51.964 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:20:51.964 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5433 00:20:52.225 [2024-10-07 14:31:15.689385] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5433: invalid serial number 'SPDKISFASTANDAWESOME' 00:20:52.225 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:20:52.225 { 00:20:52.225 "nqn": "nqn.2016-06.io.spdk:cnode5433", 00:20:52.225 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:20:52.225 "method": "nvmf_create_subsystem", 00:20:52.225 "req_id": 1 00:20:52.225 } 00:20:52.225 Got JSON-RPC error response 00:20:52.225 response: 00:20:52.225 { 00:20:52.225 "code": -32602, 00:20:52.225 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:20:52.225 }' 00:20:52.225 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:20:52.225 { 00:20:52.225 "nqn": "nqn.2016-06.io.spdk:cnode5433", 00:20:52.225 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:20:52.225 "method": "nvmf_create_subsystem", 00:20:52.225 
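The `invalid.sh@40-41` step above captures the `nvmf_create_subsystem -t foobar` failure into `$out` and then glob-matches the JSON-RPC error text with `[[ $out == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]`. A condensed, self-contained sketch of that check: the `$out` value here is pasted from the log (an assumption standing in for a live `rpc.py` call), and the bash `[[ ... ]]` match is written as its POSIX `case` equivalent.

```shell
# Sketch (assumption: condensed form of the out=/match check performed by
# target/invalid.sh@40-41). The real test captures rpc.py output over the
# RPC socket; here $out is the response text copied from the log so the
# glob match can run standalone.
out='request:
{
  "nqn": "nqn.2016-06.io.spdk:cnode15216",
  "tgt_name": "foobar",
  "method": "nvmf_create_subsystem",
  "req_id": 1
}
Got JSON-RPC error response
response:
{
  "code": -32603,
  "message": "Unable to find target foobar"
}'

# POSIX equivalent of the bash pattern [[ $out == *"Unable to find target"* ]]
case $out in
    *'Unable to find target'*) result=pass ;;
    *) result=fail ;;
esac
printf '%s\n' "$result"   # prints "pass"
```

The test asserts only on the `message` substring, not the `-32603` code, so the check stays valid even if the RPC layer renumbers its internal error codes.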
"req_id": 1 00:20:52.225 } 00:20:52.225 Got JSON-RPC error response 00:20:52.225 response: 00:20:52.225 { 00:20:52.225 "code": -32602, 00:20:52.225 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:20:52.225 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:20:52.225 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:20:52.225 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode28154 00:20:52.225 [2024-10-07 14:31:15.882011] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28154: invalid model number 'SPDK_Controller' 00:20:52.225 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:20:52.225 { 00:20:52.225 "nqn": "nqn.2016-06.io.spdk:cnode28154", 00:20:52.225 "model_number": "SPDK_Controller\u001f", 00:20:52.225 "method": "nvmf_create_subsystem", 00:20:52.225 "req_id": 1 00:20:52.225 } 00:20:52.225 Got JSON-RPC error response 00:20:52.225 response: 00:20:52.225 { 00:20:52.225 "code": -32602, 00:20:52.225 "message": "Invalid MN SPDK_Controller\u001f" 00:20:52.225 }' 00:20:52.225 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:20:52.225 { 00:20:52.225 "nqn": "nqn.2016-06.io.spdk:cnode28154", 00:20:52.225 "model_number": "SPDK_Controller\u001f", 00:20:52.225 "method": "nvmf_create_subsystem", 00:20:52.225 "req_id": 1 00:20:52.225 } 00:20:52.225 Got JSON-RPC error response 00:20:52.225 response: 00:20:52.225 { 00:20:52.225 "code": -32602, 00:20:52.225 "message": "Invalid MN SPDK_Controller\u001f" 00:20:52.225 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:20:52.225 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:20:52.225 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:20:52.225 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:20:52.226 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:20:52.226 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:20:52.226 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:20:52.226 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.226 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:20:52.226 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:20:52.226 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:20:52.226 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.226 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.226 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:20:52.226 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:20:52.487 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:20:52.487 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.487 14:31:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.487 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:20:52.487 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:20:52.487 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:20:52.487 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:20:52.488 14:31:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:20:52.488 14:31:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.488 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:20:52.488 14:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.488 14:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.488 14:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ F == \- ]] 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'F|h'\''s1>" hT/[PSp@aM|u' 00:20:52.488 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'F|h'\''s1>" hT/[PSp@aM|u' nqn.2016-06.io.spdk:cnode28916 00:20:52.750 [2024-10-07 14:31:16.231184] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28916: invalid serial number 'F|h's1>" hT/[PSp@aM|u' 00:20:52.750 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:20:52.750 { 00:20:52.750 "nqn": "nqn.2016-06.io.spdk:cnode28916", 00:20:52.750 "serial_number": "F|h'\''s1>\" hT/[PSp@aM|u", 00:20:52.750 "method": "nvmf_create_subsystem", 00:20:52.750 "req_id": 1 00:20:52.750 } 00:20:52.750 Got JSON-RPC error response 00:20:52.750 response: 00:20:52.750 { 00:20:52.750 "code": -32602, 00:20:52.750 "message": "Invalid SN F|h'\''s1>\" hT/[PSp@aM|u" 00:20:52.750 }' 00:20:52.750 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:20:52.750 { 00:20:52.750 "nqn": "nqn.2016-06.io.spdk:cnode28916", 00:20:52.750 "serial_number": "F|h's1>\" hT/[PSp@aM|u", 00:20:52.750 "method": "nvmf_create_subsystem", 00:20:52.750 "req_id": 1 00:20:52.750 } 00:20:52.750 Got JSON-RPC error response 00:20:52.750 response: 00:20:52.750 { 00:20:52.750 "code": -32602, 00:20:52.750 "message": "Invalid SN F|h's1>\" hT/[PSp@aM|u" 00:20:52.750 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:20:52.750 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:20:52.750 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 
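The long `(( ll < length ))` / `printf %x` / `string+=` run traced above is `gen_random_s` assembling one random character per xtrace iteration from the `chars=('32' ... '127')` array, i.e. ASCII space through DEL. A compact sketch of the same logic, collapsed into one loop: the function name and character range come from the trace, while drawing randomness from `/dev/urandom` via `od` is an assumption (the original indexes the array with bash's `RANDOM`).

```shell
# Sketch (assumption: reimplementation of the gen_random_s helper traced in
# target/invalid.sh@19-31): emit a string of $1 characters drawn from ASCII
# 32..127, the same 96-entry range as the chars array in the log.
gen_random_s() {
    length=$1
    ll=0
    string=''
    while [ "$ll" -lt "$length" ]; do
        # one random byte, folded into 32..127 (96 possible values)
        c=$(od -An -N1 -tu1 /dev/urandom | tr -d ' ')
        c=$(( c % 96 + 32 ))
        # append the character via its octal escape, as the original does
        # with per-character printf/echo -e pairs
        string=$string$(printf "\\$(printf '%03o' "$c")")
        ll=$(( ll + 1 ))
    done
    printf '%s\n' "$string"
}
```

Because the range deliberately includes shell metacharacters (`'`, `"`, `|`, `\`) and the DEL control byte, strings like `F|h's1>" hT/[PSp@aM|u` above must be single-quoted with `'\''` splices when passed to `rpc.py`, which is exactly what the `invalid.sh@54` invocation in the trace does.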
00:20:52.750 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:20:52.750 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:20:52.750 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:20:52.750 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:20:52.750 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.750 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:20:52.750 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:20:52.750 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:20:52.750 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.750 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.750 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:20:52.750 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:20:52.750 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:20:52.750 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.751 14:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:20:52.751 14:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:20:52.751 14:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:20:52.751 14:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.751 14:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.751 14:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:52.751 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:20:53.014 14:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:20:53.014 14:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:20:53.014 14:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:53.014 14:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:20:53.014 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:53.015 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:53.015 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:20:53.015 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:20:53.015 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:20:53.015 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:53.015 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:53.015 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:20:53.015 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:20:53.015 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:20:53.015 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:53.015 14:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:53.015 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:20:53.015 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:20:53.015 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:20:53.015 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:20:53.015 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:20:53.015 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ' == \- ]] 00:20:53.015 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ''\''x6qMsK3ds-$_i8f|F-3?pRG;6sq3}3\'\'':Y&tCsQ' 00:20:53.015 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ''\''x6qMsK3ds-$_i8f|F-3?pRG;6sq3}3\'\'':Y&tCsQ' nqn.2016-06.io.spdk:cnode23130 00:20:53.276 [2024-10-07 14:31:16.740951] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23130: invalid model number ''x6qMsK3ds-$_i8f|F-3?pRG;6sq3}3\':Y&tCsQ' 00:20:53.276 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:20:53.276 { 00:20:53.276 "nqn": "nqn.2016-06.io.spdk:cnode23130", 00:20:53.276 "model_number": "'\''x\u007f6qMsK3ds-$_i8f|F-3?pRG;6sq3}3\\'\'':Y&tCsQ", 00:20:53.276 "method": "nvmf_create_subsystem", 00:20:53.276 "req_id": 1 00:20:53.276 } 00:20:53.276 Got JSON-RPC error response 00:20:53.276 response: 00:20:53.276 { 00:20:53.276 "code": -32602, 00:20:53.276 "message": "Invalid MN '\''x\u007f6qMsK3ds-$_i8f|F-3?pRG;6sq3}3\\'\'':Y&tCsQ" 00:20:53.276 }' 00:20:53.276 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:20:53.276 
{ 00:20:53.276 "nqn": "nqn.2016-06.io.spdk:cnode23130", 00:20:53.276 "model_number": "'x\u007f6qMsK3ds-$_i8f|F-3?pRG;6sq3}3\\':Y&tCsQ", 00:20:53.276 "method": "nvmf_create_subsystem", 00:20:53.276 "req_id": 1 00:20:53.276 } 00:20:53.276 Got JSON-RPC error response 00:20:53.276 response: 00:20:53.276 { 00:20:53.276 "code": -32602, 00:20:53.276 "message": "Invalid MN 'x\u007f6qMsK3ds-$_i8f|F-3?pRG;6sq3}3\\':Y&tCsQ" 00:20:53.276 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:20:53.276 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:20:53.276 [2024-10-07 14:31:16.921651] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.276 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:20:53.538 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:20:53.538 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:20:53.538 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:20:53.538 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:20:53.538 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:20:53.800 [2024-10-07 14:31:17.306899] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:20:53.800 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:20:53.800 { 00:20:53.800 "nqn": "nqn.2016-06.io.spdk:cnode", 00:20:53.800 "listen_address": { 00:20:53.800 "trtype": "tcp", 00:20:53.800 "traddr": "", 
00:20:53.800 "trsvcid": "4421" 00:20:53.800 }, 00:20:53.800 "method": "nvmf_subsystem_remove_listener", 00:20:53.800 "req_id": 1 00:20:53.800 } 00:20:53.800 Got JSON-RPC error response 00:20:53.800 response: 00:20:53.800 { 00:20:53.800 "code": -32602, 00:20:53.800 "message": "Invalid parameters" 00:20:53.800 }' 00:20:53.800 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:20:53.800 { 00:20:53.800 "nqn": "nqn.2016-06.io.spdk:cnode", 00:20:53.800 "listen_address": { 00:20:53.800 "trtype": "tcp", 00:20:53.800 "traddr": "", 00:20:53.800 "trsvcid": "4421" 00:20:53.800 }, 00:20:53.800 "method": "nvmf_subsystem_remove_listener", 00:20:53.800 "req_id": 1 00:20:53.800 } 00:20:53.800 Got JSON-RPC error response 00:20:53.800 response: 00:20:53.800 { 00:20:53.800 "code": -32602, 00:20:53.800 "message": "Invalid parameters" 00:20:53.800 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:20:53.800 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9088 -i 0 00:20:53.800 [2024-10-07 14:31:17.495498] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9088: invalid cntlid range [0-65519] 00:20:54.061 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:20:54.061 { 00:20:54.061 "nqn": "nqn.2016-06.io.spdk:cnode9088", 00:20:54.061 "min_cntlid": 0, 00:20:54.061 "method": "nvmf_create_subsystem", 00:20:54.061 "req_id": 1 00:20:54.061 } 00:20:54.061 Got JSON-RPC error response 00:20:54.061 response: 00:20:54.061 { 00:20:54.061 "code": -32602, 00:20:54.061 "message": "Invalid cntlid range [0-65519]" 00:20:54.061 }' 00:20:54.061 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:20:54.061 { 00:20:54.061 "nqn": "nqn.2016-06.io.spdk:cnode9088", 00:20:54.061 "min_cntlid": 0, 
00:20:54.061 "method": "nvmf_create_subsystem", 00:20:54.061 "req_id": 1 00:20:54.061 } 00:20:54.061 Got JSON-RPC error response 00:20:54.061 response: 00:20:54.061 { 00:20:54.061 "code": -32602, 00:20:54.061 "message": "Invalid cntlid range [0-65519]" 00:20:54.061 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:54.061 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13631 -i 65520 00:20:54.061 [2024-10-07 14:31:17.684141] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13631: invalid cntlid range [65520-65519] 00:20:54.061 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:20:54.061 { 00:20:54.061 "nqn": "nqn.2016-06.io.spdk:cnode13631", 00:20:54.061 "min_cntlid": 65520, 00:20:54.061 "method": "nvmf_create_subsystem", 00:20:54.061 "req_id": 1 00:20:54.061 } 00:20:54.061 Got JSON-RPC error response 00:20:54.061 response: 00:20:54.061 { 00:20:54.061 "code": -32602, 00:20:54.061 "message": "Invalid cntlid range [65520-65519]" 00:20:54.061 }' 00:20:54.061 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:20:54.061 { 00:20:54.061 "nqn": "nqn.2016-06.io.spdk:cnode13631", 00:20:54.061 "min_cntlid": 65520, 00:20:54.061 "method": "nvmf_create_subsystem", 00:20:54.061 "req_id": 1 00:20:54.061 } 00:20:54.061 Got JSON-RPC error response 00:20:54.061 response: 00:20:54.061 { 00:20:54.061 "code": -32602, 00:20:54.061 "message": "Invalid cntlid range [65520-65519]" 00:20:54.061 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:54.061 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17792 -I 0 00:20:54.322 [2024-10-07 14:31:17.872782] nvmf_rpc.c: 
434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17792: invalid cntlid range [1-0] 00:20:54.322 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:20:54.322 { 00:20:54.322 "nqn": "nqn.2016-06.io.spdk:cnode17792", 00:20:54.322 "max_cntlid": 0, 00:20:54.322 "method": "nvmf_create_subsystem", 00:20:54.322 "req_id": 1 00:20:54.322 } 00:20:54.322 Got JSON-RPC error response 00:20:54.322 response: 00:20:54.322 { 00:20:54.322 "code": -32602, 00:20:54.322 "message": "Invalid cntlid range [1-0]" 00:20:54.322 }' 00:20:54.322 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:20:54.322 { 00:20:54.322 "nqn": "nqn.2016-06.io.spdk:cnode17792", 00:20:54.322 "max_cntlid": 0, 00:20:54.322 "method": "nvmf_create_subsystem", 00:20:54.322 "req_id": 1 00:20:54.322 } 00:20:54.322 Got JSON-RPC error response 00:20:54.322 response: 00:20:54.322 { 00:20:54.322 "code": -32602, 00:20:54.322 "message": "Invalid cntlid range [1-0]" 00:20:54.322 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:54.322 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10019 -I 65520 00:20:54.583 [2024-10-07 14:31:18.057421] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10019: invalid cntlid range [1-65520] 00:20:54.583 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:20:54.583 { 00:20:54.583 "nqn": "nqn.2016-06.io.spdk:cnode10019", 00:20:54.583 "max_cntlid": 65520, 00:20:54.583 "method": "nvmf_create_subsystem", 00:20:54.583 "req_id": 1 00:20:54.583 } 00:20:54.583 Got JSON-RPC error response 00:20:54.583 response: 00:20:54.583 { 00:20:54.583 "code": -32602, 00:20:54.583 "message": "Invalid cntlid range [1-65520]" 00:20:54.583 }' 00:20:54.583 14:31:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:20:54.583 { 00:20:54.583 "nqn": "nqn.2016-06.io.spdk:cnode10019", 00:20:54.583 "max_cntlid": 65520, 00:20:54.583 "method": "nvmf_create_subsystem", 00:20:54.583 "req_id": 1 00:20:54.583 } 00:20:54.583 Got JSON-RPC error response 00:20:54.583 response: 00:20:54.583 { 00:20:54.583 "code": -32602, 00:20:54.583 "message": "Invalid cntlid range [1-65520]" 00:20:54.583 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:54.583 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28275 -i 6 -I 5 00:20:54.583 [2024-10-07 14:31:18.242019] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28275: invalid cntlid range [6-5] 00:20:54.583 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:20:54.583 { 00:20:54.583 "nqn": "nqn.2016-06.io.spdk:cnode28275", 00:20:54.583 "min_cntlid": 6, 00:20:54.583 "max_cntlid": 5, 00:20:54.583 "method": "nvmf_create_subsystem", 00:20:54.583 "req_id": 1 00:20:54.583 } 00:20:54.583 Got JSON-RPC error response 00:20:54.583 response: 00:20:54.583 { 00:20:54.583 "code": -32602, 00:20:54.583 "message": "Invalid cntlid range [6-5]" 00:20:54.583 }' 00:20:54.583 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:20:54.583 { 00:20:54.583 "nqn": "nqn.2016-06.io.spdk:cnode28275", 00:20:54.583 "min_cntlid": 6, 00:20:54.583 "max_cntlid": 5, 00:20:54.583 "method": "nvmf_create_subsystem", 00:20:54.583 "req_id": 1 00:20:54.583 } 00:20:54.583 Got JSON-RPC error response 00:20:54.583 response: 00:20:54.583 { 00:20:54.583 "code": -32602, 00:20:54.583 "message": "Invalid cntlid range [6-5]" 00:20:54.583 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:20:54.583 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:20:54.867 { 00:20:54.867 "name": "foobar", 00:20:54.867 "method": "nvmf_delete_target", 00:20:54.867 "req_id": 1 00:20:54.867 } 00:20:54.867 Got JSON-RPC error response 00:20:54.867 response: 00:20:54.867 { 00:20:54.867 "code": -32602, 00:20:54.867 "message": "The specified target doesn'\''t exist, cannot delete it." 00:20:54.867 }' 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:20:54.867 { 00:20:54.867 "name": "foobar", 00:20:54.867 "method": "nvmf_delete_target", 00:20:54.867 "req_id": 1 00:20:54.867 } 00:20:54.867 Got JSON-RPC error response 00:20:54.867 response: 00:20:54.867 { 00:20:54.867 "code": -32602, 00:20:54.867 "message": "The specified target doesn't exist, cannot delete it." 
00:20:54.867 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:54.867 rmmod nvme_tcp 00:20:54.867 rmmod nvme_fabrics 00:20:54.867 rmmod nvme_keyring 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 2988020 ']' 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 2988020 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 2988020 ']' 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 2988020 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2988020 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2988020' 00:20:54.867 killing process with pid 2988020 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 2988020 00:20:54.867 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 2988020 00:20:55.809 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:55.809 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:55.809 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:55.809 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:20:55.809 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:20:55.809 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:55.809 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-restore 00:20:55.809 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:55.809 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:55.809 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.809 14:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.809 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:58.354 00:20:58.354 real 0m14.719s 00:20:58.354 user 0m21.882s 00:20:58.354 sys 0m6.703s 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:20:58.354 ************************************ 00:20:58.354 END TEST nvmf_invalid 00:20:58.354 ************************************ 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:58.354 ************************************ 00:20:58.354 START TEST nvmf_connect_stress 00:20:58.354 ************************************ 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:20:58.354 * Looking for test storage... 
00:20:58.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:20:58.354 14:31:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:58.354 14:31:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:58.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.354 --rc genhtml_branch_coverage=1 00:20:58.354 --rc genhtml_function_coverage=1 00:20:58.354 --rc genhtml_legend=1 00:20:58.354 --rc geninfo_all_blocks=1 00:20:58.354 --rc geninfo_unexecuted_blocks=1 00:20:58.354 00:20:58.354 ' 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:58.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.354 --rc genhtml_branch_coverage=1 00:20:58.354 --rc genhtml_function_coverage=1 00:20:58.354 --rc genhtml_legend=1 00:20:58.354 --rc geninfo_all_blocks=1 00:20:58.354 --rc geninfo_unexecuted_blocks=1 00:20:58.354 00:20:58.354 ' 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:58.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.354 --rc genhtml_branch_coverage=1 00:20:58.354 --rc genhtml_function_coverage=1 00:20:58.354 --rc genhtml_legend=1 00:20:58.354 --rc geninfo_all_blocks=1 00:20:58.354 --rc geninfo_unexecuted_blocks=1 00:20:58.354 00:20:58.354 ' 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:58.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.354 --rc genhtml_branch_coverage=1 00:20:58.354 --rc genhtml_function_coverage=1 00:20:58.354 --rc genhtml_legend=1 00:20:58.354 --rc geninfo_all_blocks=1 00:20:58.354 --rc geninfo_unexecuted_blocks=1 00:20:58.354 00:20:58.354 ' 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.354 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:58.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:20:58.355 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:06.494 14:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:06.494 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:06.494 14:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:06.494 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.494 14:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:06.494 Found net devices under 0000:31:00.0: cvl_0_0 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:06.494 Found net devices under 0000:31:00.1: cvl_0_1 
00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:06.494 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:06.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:06.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:21:06.495 00:21:06.495 --- 10.0.0.2 ping statistics --- 00:21:06.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.495 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:06.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:06.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:21:06.495 00:21:06.495 --- 10.0.0.1 ping statistics --- 00:21:06.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.495 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:21:06.495 14:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=2993454 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 2993454 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 2993454 ']' 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:06.495 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:06.495 [2024-10-07 14:31:29.483593] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:21:06.495 [2024-10-07 14:31:29.483728] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.495 [2024-10-07 14:31:29.641659] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:06.495 [2024-10-07 14:31:29.866772] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.495 [2024-10-07 14:31:29.866851] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.495 [2024-10-07 14:31:29.866865] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.495 [2024-10-07 14:31:29.866878] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.495 [2024-10-07 14:31:29.866888] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:06.495 [2024-10-07 14:31:29.869074] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.495 [2024-10-07 14:31:29.869263] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.495 [2024-10-07 14:31:29.869287] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:06.756 [2024-10-07 14:31:30.301123] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 
-- # xtrace_disable 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:06.756 [2024-10-07 14:31:30.326951] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:06.756 NULL1 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2993620 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.756 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:07.327 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.327 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:07.327 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:07.327 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.327 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:07.588 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.588 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:07.588 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:07.588 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.588 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:07.848 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.848 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:07.848 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:07.848 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.848 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:08.108 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.108 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:08.108 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:08.108 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.108 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:08.368 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.368 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:08.368 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:08.368 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.368 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:08.938 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.939 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:08.939 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:08.939 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.939 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:09.199 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.199 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:09.199 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:09.199 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.199 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:09.459 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.459 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:09.459 14:31:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:09.459 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.459 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:09.719 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.719 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:09.719 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:09.719 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.719 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:10.290 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.290 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:10.290 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:10.290 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.290 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:10.552 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.552 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:10.552 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:10.552 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.552 
14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:10.836 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.836 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:10.836 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:10.836 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.836 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:11.123 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.123 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:11.123 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:11.123 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.123 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:11.416 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.416 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:11.416 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:11.416 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.416 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:11.712 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.712 
14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:11.712 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:11.712 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.712 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:11.980 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.980 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:11.980 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:11.980 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.980 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:12.551 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.551 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:12.551 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:12.551 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.551 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:12.812 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.812 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:12.812 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
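The repeated `kill -0 2993620` / `rpc_cmd` entries above are connect_stress.sh polling its stress process: `kill -0` sends no signal and merely tests whether the PID still exists, so the loop keeps issuing RPCs while the workload runs and falls through to `wait` once it exits. A minimal sketch of that pattern (the PID, sleep workload, and no-op `rpc_cmd` below are stand-ins, not the real test's values):

```shell
#!/usr/bin/env bash
# Liveness-poll pattern: do work while a background PID is alive.
# rpc_cmd is a placeholder for the real SPDK RPC helper.
rpc_cmd() { :; }

poll_until_exit() {
    local pid=$1
    # kill -0 delivers no signal; its exit status just reports PID existence.
    while kill -0 "$pid" 2>/dev/null; do
        rpc_cmd          # issue RPCs while the stress process runs
        sleep 0.1
    done
    wait "$pid" 2>/dev/null  # reap the child and pick up its exit status
}

sleep 0.3 &               # stand-in for the stress workload
poll_until_exit $!
echo "stress process exited"
```

Once the PID is gone, `kill -0` fails (the log's "No such process" at connect_stress.sh line 34) and the script proceeds to `wait` and cleanup.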
00:21:12.812 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.812 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:13.074 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.074 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:13.074 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:13.074 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.074 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:13.335 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.335 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:13.335 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:13.335 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.335 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:13.596 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.596 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:13.596 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:13.596 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.596 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:21:14.167 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.167 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:14.167 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:14.167 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.167 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:14.429 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.429 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:14.429 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:14.429 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.429 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:14.690 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.690 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:14.690 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:14.690 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.690 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:14.949 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.949 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 2993620 00:21:14.949 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:14.949 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.949 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:15.210 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.210 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:15.210 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:15.210 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.210 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:15.781 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.781 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:15.781 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:15.781 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.781 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:16.042 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.042 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:16.042 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:16.042 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:16.042 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:16.303 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.303 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:16.303 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:16.303 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.303 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:16.564 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.564 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:16.564 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:16.564 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.564 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:16.825 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2993620 00:21:17.085 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2993620) - No such process 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2993620 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:17.085 rmmod nvme_tcp 00:21:17.085 rmmod nvme_fabrics 00:21:17.085 rmmod nvme_keyring 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 2993454 ']' 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 2993454 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2993454 ']' 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2993454 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@955 -- # uname 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2993454 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2993454' 00:21:17.085 killing process with pid 2993454 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 2993454 00:21:17.085 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 2993454 00:21:17.656 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:17.656 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:17.656 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:17.656 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:21:17.656 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:21:17.656 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:17.656 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:21:17.656 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:17.656 14:31:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:17.656 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.656 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:17.656 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:20.204 00:21:20.204 real 0m21.815s 00:21:20.204 user 0m42.755s 00:21:20.204 sys 0m9.270s 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:20.204 ************************************ 00:21:20.204 END TEST nvmf_connect_stress 00:21:20.204 ************************************ 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:20.204 ************************************ 00:21:20.204 START TEST nvmf_fused_ordering 00:21:20.204 ************************************ 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:21:20.204 * Looking for test storage... 
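The `run_test nvmf_fused_ordering …` invocation above brackets each test script with the `START TEST` / `END TEST` banners seen in the log. A minimal sketch of that wrapper shape (the real `run_test` in common/autotest_common.sh also records timing and xtrace state; this is only the banner-and-status skeleton, under that assumption):

```shell
#!/usr/bin/env bash
# Sketch of a run_test-style wrapper: banner, run the script with its
# arguments, banner again, propagate the script's exit status.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test demo_test true
```

Because the wrapper returns the wrapped command's status, a failing test script fails the whole `run_test` call, which is what lets the harness chain many tests in one job.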
00:21:20.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:21:20.204 14:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:21:20.204 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:20.204 14:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:20.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.204 --rc genhtml_branch_coverage=1 00:21:20.204 --rc genhtml_function_coverage=1 00:21:20.204 --rc genhtml_legend=1 00:21:20.204 --rc geninfo_all_blocks=1 00:21:20.204 --rc geninfo_unexecuted_blocks=1 00:21:20.204 00:21:20.204 ' 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:20.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.205 --rc genhtml_branch_coverage=1 00:21:20.205 --rc genhtml_function_coverage=1 00:21:20.205 --rc genhtml_legend=1 00:21:20.205 --rc geninfo_all_blocks=1 00:21:20.205 --rc geninfo_unexecuted_blocks=1 00:21:20.205 00:21:20.205 ' 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:20.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.205 --rc genhtml_branch_coverage=1 00:21:20.205 --rc genhtml_function_coverage=1 00:21:20.205 --rc genhtml_legend=1 00:21:20.205 --rc geninfo_all_blocks=1 00:21:20.205 --rc geninfo_unexecuted_blocks=1 00:21:20.205 00:21:20.205 ' 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:20.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.205 --rc genhtml_branch_coverage=1 00:21:20.205 --rc genhtml_function_coverage=1 00:21:20.205 --rc genhtml_legend=1 00:21:20.205 --rc geninfo_all_blocks=1 00:21:20.205 --rc geninfo_unexecuted_blocks=1 00:21:20.205 00:21:20.205 ' 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
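The `cmp_versions` trace above (`lt 1.15 2`, `IFS=.-:`, `read -ra ver1`, per-field compare) checks the installed lcov version component-wise. A self-contained sketch of that algorithm (function name `version_lt` is mine; the real helpers live in scripts/common.sh):

```shell
#!/usr/bin/env bash
# Component-wise "less than" version compare, as traced in the log:
# split both versions on '.', '-', ':' and compare fields numerically,
# treating missing fields as 0.
version_lt() {
    local IFS=.-:
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=${#v1[@]} i
    (( ${#v2[@]} > n )) && n=${#v2[@]}
    for ((i = 0; i < n; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This is why `lt 1.15 2` succeeds in the trace: the first fields already decide it (1 < 2), so lcov 1.x selects the `--rc lcov_branch_coverage=1` style options rather than the 2.x spellings.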
00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:20.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:21:20.205 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:28.350 14:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:28.350 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:28.350 14:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:28.350 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.350 14:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:28.350 Found net devices under 0000:31:00.0: cvl_0_0 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:28.350 Found net devices under 0000:31:00.1: cvl_0_1 
00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:28.350 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:28.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:28.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:21:28.351 00:21:28.351 --- 10.0.0.2 ping statistics --- 00:21:28.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.351 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:28.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:28.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:21:28.351 00:21:28.351 --- 10.0.0.1 ping statistics --- 00:21:28.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.351 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:21:28.351 14:31:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=3000063 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 3000063 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 3000063 ']' 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:28.351 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:28.351 [2024-10-07 14:31:51.360875] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:21:28.351 [2024-10-07 14:31:51.361011] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.351 [2024-10-07 14:31:51.516494] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.351 [2024-10-07 14:31:51.748092] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.351 [2024-10-07 14:31:51.748160] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.351 [2024-10-07 14:31:51.748173] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.351 [2024-10-07 14:31:51.748187] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.351 [2024-10-07 14:31:51.748197] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:28.351 [2024-10-07 14:31:51.749614] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.612 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:28.612 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:21:28.612 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:28.612 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:28.612 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:28.612 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.612 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:28.613 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.613 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:28.613 [2024-10-07 14:31:52.170346] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.613 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.613 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:21:28.613 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.613 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:28.613 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.613 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:28.613 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.613 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:28.613 [2024-10-07 14:31:52.194717] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.613 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.613 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:21:28.613 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.613 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:28.613 NULL1 00:21:28.613 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.613 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:21:28.613 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.613 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:28.613 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.613 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:21:28.613 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.613 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:28.613 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.613 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:28.613 [2024-10-07 14:31:52.296588] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:21:28.613 [2024-10-07 14:31:52.296675] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3000203 ] 00:21:29.184 Attached to nqn.2016-06.io.spdk:cnode1 00:21:29.184 Namespace ID: 1 size: 1GB 00:21:29.184 fused_ordering(0) 00:21:29.184 fused_ordering(1) 00:21:29.184 fused_ordering(2) 00:21:29.185 fused_ordering(3) 00:21:29.185 fused_ordering(4) 00:21:29.185 fused_ordering(5) 00:21:29.185 fused_ordering(6) 00:21:29.185 fused_ordering(7) 00:21:29.185 fused_ordering(8) 00:21:29.185 fused_ordering(9) 00:21:29.185 fused_ordering(10) 00:21:29.185 fused_ordering(11) 00:21:29.185 fused_ordering(12) 00:21:29.185 fused_ordering(13) 00:21:29.185 fused_ordering(14) 00:21:29.185 fused_ordering(15) 00:21:29.185 fused_ordering(16) 00:21:29.185 fused_ordering(17) 00:21:29.185 fused_ordering(18) 00:21:29.185 fused_ordering(19) 00:21:29.185 fused_ordering(20) 00:21:29.185 fused_ordering(21) 00:21:29.185 fused_ordering(22) 00:21:29.185 fused_ordering(23) 00:21:29.185 fused_ordering(24) 00:21:29.185 fused_ordering(25) 00:21:29.185 fused_ordering(26) 00:21:29.185 fused_ordering(27) 00:21:29.185 
fused_ordering(28) 00:21:29.185 [fused_ordering(29) through fused_ordering(1022) elided: repetitive sequential fused_ordering output, timestamps 00:21:29.185 to 00:21:31.536] fused_ordering(1023)
00:21:31.536 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:21:31.536 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:21:31.536 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup
00:21:31.536 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:21:31.536 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:31.536 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:21:31.536 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:31.536 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:21:31.536 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:31.536 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:21:31.536 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:21:31.536 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 3000063 ']'
00:21:31.536 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 3000063
00:21:31.536 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 3000063 ']'
00:21:31.536 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 3000063
00:21:31.536 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname
00:21:31.536 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:31.536 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3000063
00:21:31.536 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:21:31.536 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:21:31.536 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3000063'
killing process with pid 3000063
00:21:31.536 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 3000063
00:21:31.536 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 3000063
00:21:32.108 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:21:32.108 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:21:32.108 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:21:32.108 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:21:32.108 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:21:32.108 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save
00:21:32.108 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore
00:21:32.108 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:32.108 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:21:32.108 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:32.108 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:32.108 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:34.657 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:34.657
00:21:34.657 real 0m14.323s
00:21:34.657 user 0m8.253s
00:21:34.657 sys 0m7.199s
00:21:34.657 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:34.657 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:21:34.657 ************************************
00:21:34.657 END TEST nvmf_fused_ordering
00:21:34.657 ************************************
00:21:34.657 14:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:21:34.657 14:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:21:34.657 14:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:34.657 14:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:34.657 ************************************
00:21:34.657 START TEST nvmf_ns_masking
00:21:34.657 ************************************
00:21:34.657 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:21:34.657 * Looking for test storage...
00:21:34.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:21:34.657 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:21:34.657 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version
00:21:34.657 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2
00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking --
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:34.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.657 --rc genhtml_branch_coverage=1 00:21:34.657 --rc genhtml_function_coverage=1 00:21:34.657 --rc genhtml_legend=1 00:21:34.657 --rc geninfo_all_blocks=1 00:21:34.657 --rc geninfo_unexecuted_blocks=1 00:21:34.657 00:21:34.657 ' 00:21:34.657 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:34.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.657 --rc genhtml_branch_coverage=1 00:21:34.657 --rc genhtml_function_coverage=1 00:21:34.657 --rc genhtml_legend=1 00:21:34.657 --rc geninfo_all_blocks=1 00:21:34.657 --rc geninfo_unexecuted_blocks=1 00:21:34.657 00:21:34.657 ' 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:34.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.658 --rc genhtml_branch_coverage=1 00:21:34.658 --rc genhtml_function_coverage=1 00:21:34.658 --rc genhtml_legend=1 00:21:34.658 --rc geninfo_all_blocks=1 00:21:34.658 --rc geninfo_unexecuted_blocks=1 00:21:34.658 00:21:34.658 ' 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:34.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.658 --rc genhtml_branch_coverage=1 00:21:34.658 --rc 
genhtml_function_coverage=1 00:21:34.658 --rc genhtml_legend=1 00:21:34.658 --rc geninfo_all_blocks=1 00:21:34.658 --rc geninfo_unexecuted_blocks=1 00:21:34.658 00:21:34.658 ' 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:34.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=284bade3-7ba6-4b7f-8d12-f8f5cad6761f 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=0dde02db-cb2a-4217-8a31-6fc11cd26813 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=bd0acb7d-8390-4800-8134-64c9b232c8bb 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g 
is_hw=no 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:21:34.658 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:42.806 14:32:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:42.806 14:32:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:42.806 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:42.806 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:21:42.806 Found net devices under 0000:31:00.0: cvl_0_0 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:42.806 Found net devices under 0000:31:00.1: cvl_0_1 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:42.806 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:42.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:21:42.807 00:21:42.807 --- 10.0.0.2 ping statistics --- 00:21:42.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.807 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:42.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:42.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:21:42.807 00:21:42.807 --- 10.0.0.1 ping statistics --- 00:21:42.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.807 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=3005150 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 3005150 
00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3005150 ']' 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:42.807 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:42.807 [2024-10-07 14:32:05.547800] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:21:42.807 [2024-10-07 14:32:05.547931] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.807 [2024-10-07 14:32:05.691118] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.807 [2024-10-07 14:32:05.871277] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.807 [2024-10-07 14:32:05.871323] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:42.807 [2024-10-07 14:32:05.871335] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.807 [2024-10-07 14:32:05.871347] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.807 [2024-10-07 14:32:05.871355] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:42.807 [2024-10-07 14:32:05.872554] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.807 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:42.807 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:21:42.807 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:42.807 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:42.807 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:42.807 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.807 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:42.807 [2024-10-07 14:32:06.506261] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.068 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:21:43.068 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:21:43.068 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:21:43.068 Malloc1 00:21:43.068 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:21:43.329 Malloc2 00:21:43.329 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:43.588 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:21:43.849 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:43.849 [2024-10-07 14:32:07.457305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.849 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:21:43.849 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I bd0acb7d-8390-4800-8134-64c9b232c8bb -a 10.0.0.2 -s 4420 -i 4 00:21:44.110 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:21:44.110 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:21:44.110 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:44.110 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:21:44.110 14:32:07 
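The calls above build the initial target side: two malloc bdevs, a subsystem with namespace 1, and a TCP listener, after which the host connects with nvme-cli. A sketch of that sequence, with `$rpc` standing in for the `scripts/rpc.py` path from the log (assumes a running `nvmf_tgt`; not runnable standalone):

```shell
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path taken from the log

# Target side: transport, backing bdevs, subsystem, namespace, listener.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc bdev_malloc_create 64 512 -b Malloc2
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: connect with the fixed host NQN and host ID used by the test.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
     -I bd0acb7d-8390-4800-8134-64c9b232c8bb -a 10.0.0.2 -s 4420 -i 4
```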
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:21:46.019 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:46.019 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:46.019 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:46.019 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:46.019 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:46.019 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:21:46.019 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:21:46.019 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:46.019 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:21:46.019 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:21:46.019 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:21:46.019 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:46.019 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:46.279 [ 0]:0x1 00:21:46.279 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:46.279 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:46.279 
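The polling loop above (`waitforserial` in `common/autotest_common.sh`) re-runs `lsblk` until the expected number of block devices carrying the SPDK serial appears. A minimal runnable sketch of the same loop; `lsblk` is stubbed out so it works without real NVMe devices (the stub output and the one-device default are assumptions, the serial comes from the log):

```shell
# Stub lsblk so the sketch runs anywhere; a real run would list NVMe namespaces.
lsblk() { printf 'NAME    SERIAL\nnvme0n1 SPDKISFASTANDAWESOME\n'; }

waitforserial() {
    local serial=$1 expected=${2:-1} i=0 found=0
    # Retry up to 16 times, as the autotest helper does.
    while (( i++ <= 15 )); do
        found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( found == expected )) && { echo "found $found device(s)"; return 0; }
        sleep 2
    done
    return 1
}

waitforserial SPDKISFASTANDAWESOME   # prints: found 1 device(s)
```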
14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4fb291fc37064d1da7a7c47b0a024270 00:21:46.279 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4fb291fc37064d1da7a7c47b0a024270 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:46.279 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:21:46.279 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:21:46.279 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:46.279 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:46.279 [ 0]:0x1 00:21:46.279 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:46.279 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:46.539 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4fb291fc37064d1da7a7c47b0a024270 00:21:46.539 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4fb291fc37064d1da7a7c47b0a024270 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:46.539 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:21:46.539 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:46.539 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:46.539 [ 1]:0x2 00:21:46.539 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:21:46.539 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:46.539 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=36439751d553463ba6aea3a9ca0ce4b5 00:21:46.539 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 36439751d553463ba6aea3a9ca0ce4b5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:46.539 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:21:46.539 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:46.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:46.539 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:46.799 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:21:47.059 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:21:47.059 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I bd0acb7d-8390-4800-8134-64c9b232c8bb -a 10.0.0.2 -s 4420 -i 4 00:21:47.059 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:21:47.059 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:21:47.059 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:47.059 14:32:10 
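At this point the test re-exports namespace 1 with `--no-auto-visible`, so no host sees it until visibility is granted explicitly. A sketch of the RPC calls driving this part of the trace, again with `$rpc` standing in for the `scripts/rpc.py` path from the log (assumes a running `nvmf_tgt`; not runnable standalone):

```shell
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path taken from the log

# Re-add namespace 1 without automatic visibility: masked for every host by default.
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

# Grant, then later revoke, visibility of namespace 1 for one host NQN.
$rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
```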
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:21:47.059 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:21:47.059 14:32:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 
00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
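The visibility check (`ns_is_visible` in `target/ns_masking.sh`) relies on the target reporting an all-zero NGUID for a namespace the host is not allowed to see. A runnable sketch of just that comparison, with the NGUID values taken from the log instead of a live `nvme id-ns ... | jq -r .nguid` call:

```shell
ns_state() {
    # $1 is what `nvme id-ns /dev/nvme0 -n <nsid> -o json | jq -r .nguid`
    # would return; an all-zero NGUID means the namespace is masked.
    local nguid=$1
    if [[ $nguid != 00000000000000000000000000000000 ]]; then
        echo visible
    else
        echo masked
    fi
}

ns_state 4fb291fc37064d1da7a7c47b0a024270   # prints: visible
ns_state 00000000000000000000000000000000   # prints: masked
```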
# ns_is_visible 0x2 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:49.601 [ 0]:0x2 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=36439751d553463ba6aea3a9ca0ce4b5 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 36439751d553463ba6aea3a9ca0ce4b5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:49.601 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:49.601 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:21:49.601 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:49.601 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:49.601 [ 0]:0x1 00:21:49.601 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:49.601 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:49.601 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4fb291fc37064d1da7a7c47b0a024270 00:21:49.601 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4fb291fc37064d1da7a7c47b0a024270 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:49.601 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:21:49.601 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:49.601 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:49.601 [ 1]:0x2 00:21:49.601 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:49.601 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:49.601 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=36439751d553463ba6aea3a9ca0ce4b5 00:21:49.601 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 36439751d553463ba6aea3a9ca0ce4b5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:49.601 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:49.862 [ 0]:0x2 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=36439751d553463ba6aea3a9ca0ce4b5 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 36439751d553463ba6aea3a9ca0ce4b5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:21:49.862 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:50.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:50.123 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:50.123 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:21:50.123 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I bd0acb7d-8390-4800-8134-64c9b232c8bb -a 10.0.0.2 -s 4420 -i 4 00:21:50.382 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:21:50.382 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:21:50.382 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:21:50.382 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:21:50.382 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:21:50.382 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:21:52.293 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:21:52.293 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:21:52.293 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:21:52.293 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:21:52.293 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:21:52.293 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:21:52.293 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:21:52.293 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:52.553 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:21:52.553 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:21:52.553 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:21:52.553 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:52.553 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:52.553 [ 0]:0x1 00:21:52.553 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:52.553 14:32:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:52.553 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4fb291fc37064d1da7a7c47b0a024270 00:21:52.553 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4fb291fc37064d1da7a7c47b0a024270 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:52.553 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:21:52.553 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:52.553 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:52.553 [ 1]:0x2 00:21:52.553 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:52.553 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:52.813 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=36439751d553463ba6aea3a9ca0ce4b5 00:21:52.813 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 36439751d553463ba6aea3a9ca0ce4b5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:52.813 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:52.813 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:21:52.813 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:21:52.813 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:21:52.813 
14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:21:52.813 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:52.813 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:21:52.813 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:52.813 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:21:52.813 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:52.813 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:52.813 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:52.813 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:53.074 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:21:53.074 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:53.074 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:21:53.074 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:53.074 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:53.074 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:53.074 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:21:53.074 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:53.074 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:53.074 [ 0]:0x2 00:21:53.074 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:53.074 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:53.074 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=36439751d553463ba6aea3a9ca0ce4b5 00:21:53.074 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 36439751d553463ba6aea3a9ca0ce4b5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:53.074 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:53.074 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:21:53.074 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:53.074 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:53.074 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.074 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:53.074 14:32:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.074 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:53.074 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.074 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:53.074 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:21:53.074 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:53.336 [2024-10-07 14:32:16.783569] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:21:53.336 request: 00:21:53.336 { 00:21:53.336 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.336 "nsid": 2, 00:21:53.336 "host": "nqn.2016-06.io.spdk:host1", 00:21:53.336 "method": "nvmf_ns_remove_host", 00:21:53.336 "req_id": 1 00:21:53.336 } 00:21:53.336 Got JSON-RPC error response 00:21:53.336 response: 00:21:53.336 { 00:21:53.336 "code": -32602, 00:21:53.336 "message": "Invalid parameters" 00:21:53.336 } 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
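The `NOT` wrapper (`common/autotest_common.sh`) inverts a command's exit status so that expected failures, like the `nvmf_ns_remove_host` call above that returns "Invalid parameters", make the test pass. A simplified runnable sketch (the real helper also validates its argument and tracks the exit status in `es`, visible in the trace):

```shell
# Simplified NOT: succeed only when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1   # wrapped command unexpectedly succeeded
    fi
    return 0       # wrapped command failed, as the test expects
}

NOT false && echo "expected failure observed"
NOT true  || echo "unexpected success detected"
```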
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:21:53.336 14:32:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:53.336 [ 0]:0x2 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=36439751d553463ba6aea3a9ca0ce4b5 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 36439751d553463ba6aea3a9ca0ce4b5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:21:53.336 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:53.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:53.598 14:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3007561 00:21:53.598 14:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.598 14:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:21:53.598 14:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3007561 /var/tmp/host.sock 00:21:53.598 14:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3007561 ']' 00:21:53.598 14:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:21:53.598 14:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:53.598 14:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:53.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:21:53.598 14:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:53.598 14:32:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:53.598 [2024-10-07 14:32:17.191521] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:21:53.598 [2024-10-07 14:32:17.191629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3007561 ] 00:21:53.858 [2024-10-07 14:32:17.326281] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.858 [2024-10-07 14:32:17.503570] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.430 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:54.430 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:21:54.430 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:54.691 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:21:54.951 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 284bade3-7ba6-4b7f-8d12-f8f5cad6761f 00:21:54.951 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:21:54.951 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 284BADE37BA64B7F8D12F8F5CAD6761F -i 00:21:54.951 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 0dde02db-cb2a-4217-8a31-6fc11cd26813 00:21:54.951 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:21:54.951 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 0DDE02DBCB2A42178A316FC11CD26813 -i 00:21:55.211 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:55.472 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:21:55.472 14:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:21:55.472 14:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:21:55.733 nvme0n1 00:21:55.733 14:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:21:55.733 14:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:21:55.993 nvme1n2 00:21:55.993 14:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:21:55.993 14:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:21:55.993 14:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:21:55.993 14:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:21:55.993 14:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:21:56.254 14:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:21:56.254 14:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:21:56.254 14:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:21:56.254 14:32:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:21:56.514 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 284bade3-7ba6-4b7f-8d12-f8f5cad6761f == \2\8\4\b\a\d\e\3\-\7\b\a\6\-\4\b\7\f\-\8\d\1\2\-\f\8\f\5\c\a\d\6\7\6\1\f ]] 00:21:56.514 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:21:56.514 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:21:56.514 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:21:56.514 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 0dde02db-cb2a-4217-8a31-6fc11cd26813 == \0\d\d\e\0\2\d\b\-\c\b\2\a\-\4\2\1\7\-\8\a\3\1\-\6\f\c\1\1\c\d\2\6\8\1\3 ]] 00:21:56.514 14:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3007561 00:21:56.514 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3007561 ']' 00:21:56.514 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3007561 00:21:56.514 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:21:56.775 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:56.775 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3007561 00:21:56.775 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:56.775 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:56.775 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3007561' 00:21:56.775 killing process with pid 3007561 00:21:56.775 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3007561 00:21:56.775 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3007561 00:21:58.161 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:58.421 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:21:58.421 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:21:58.421 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:58.421 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- nvmf/common.sh@121 -- # sync 00:21:58.421 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:58.421 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:21:58.421 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:58.421 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:58.421 rmmod nvme_tcp 00:21:58.421 rmmod nvme_fabrics 00:21:58.421 rmmod nvme_keyring 00:21:58.421 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:58.421 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:21:58.421 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:21:58.421 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 3005150 ']' 00:21:58.421 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 3005150 00:21:58.421 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3005150 ']' 00:21:58.421 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3005150 00:21:58.421 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:21:58.421 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:58.421 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3005150 00:21:58.421 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:58.421 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:58.421 14:32:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3005150' 00:21:58.421 killing process with pid 3005150 00:21:58.421 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3005150 00:21:58.421 14:32:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3005150 00:21:59.807 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:59.807 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:59.807 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:59.807 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:21:59.807 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:21:59.807 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:59.807 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:21:59.807 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:59.807 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:59.807 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.807 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.807 14:32:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.721 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:01.721 00:22:01.721 real 0m27.373s 00:22:01.721 user 0m28.602s 00:22:01.721 sys 
0m7.966s 00:22:01.721 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:01.721 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:22:01.721 ************************************ 00:22:01.721 END TEST nvmf_ns_masking 00:22:01.721 ************************************ 00:22:01.721 14:32:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:22:01.721 14:32:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:22:01.721 14:32:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:01.721 14:32:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:01.721 14:32:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:01.721 ************************************ 00:22:01.721 START TEST nvmf_nvme_cli 00:22:01.721 ************************************ 00:22:01.721 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:22:01.721 * Looking for test storage... 
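The `killprocess` / `kill -0` sequence traced a few lines above (from `common/autotest_common.sh`) follows a standard probe-terminate-reap pattern. A minimal sketch of that pattern, simplified for illustration (the function name here is illustrative; the real helper also checks the process name via `ps` and refuses to kill `sudo`):

```shell
# Illustrative sketch of the kill-and-wait pattern visible in the trace:
# probe liveness with `kill -0`, terminate, then reap with `wait`.
# (Simplified; not the actual common/autotest_common.sh implementation.)
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1  # not running: nothing to do
    kill "$pid"                             # default SIGTERM
    wait "$pid" 2>/dev/null || true         # reap child; ignore TERM status
}
```

`wait` only reaps children of the current shell, which holds for the test scripts here since `spdk_tgt` and friends are launched as background jobs of the same shell.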
00:22:01.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:22:01.984 14:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:01.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.984 --rc 
genhtml_branch_coverage=1 00:22:01.984 --rc genhtml_function_coverage=1 00:22:01.984 --rc genhtml_legend=1 00:22:01.984 --rc geninfo_all_blocks=1 00:22:01.984 --rc geninfo_unexecuted_blocks=1 00:22:01.984 00:22:01.984 ' 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:01.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.984 --rc genhtml_branch_coverage=1 00:22:01.984 --rc genhtml_function_coverage=1 00:22:01.984 --rc genhtml_legend=1 00:22:01.984 --rc geninfo_all_blocks=1 00:22:01.984 --rc geninfo_unexecuted_blocks=1 00:22:01.984 00:22:01.984 ' 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:01.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.984 --rc genhtml_branch_coverage=1 00:22:01.984 --rc genhtml_function_coverage=1 00:22:01.984 --rc genhtml_legend=1 00:22:01.984 --rc geninfo_all_blocks=1 00:22:01.984 --rc geninfo_unexecuted_blocks=1 00:22:01.984 00:22:01.984 ' 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:01.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.984 --rc genhtml_branch_coverage=1 00:22:01.984 --rc genhtml_function_coverage=1 00:22:01.984 --rc genhtml_legend=1 00:22:01.984 --rc geninfo_all_blocks=1 00:22:01.984 --rc geninfo_unexecuted_blocks=1 00:22:01.984 00:22:01.984 ' 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.984 14:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:22:01.984 14:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.984 14:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:22:01.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:22:01.984 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:22:01.985 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:22:01.985 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0
00:22:01.985 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64
00:22:01.985 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:22:01.985 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=()
00:22:01.985 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit
00:22:01.985 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:22:01.985 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:01.985 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs
00:22:01.985 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no
00:22:01.985 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns
00:22:01.985 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:01.985 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:01.985 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:01.985 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:22:01.985 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:22:01.985 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable
00:22:01.985 14:32:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:22:10.341 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:22:10.341 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=()
00:22:10.341 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs
00:22:10.341 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=()
00:22:10.341 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:22:10.341 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=()
00:22:10.341 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers
00:22:10.341 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=()
00:22:10.341 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs
00:22:10.341 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=()
00:22:10.341 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810
00:22:10.341 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=()
00:22:10.341 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=()
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:22:10.342 Found 0000:31:00.0 (0x8086 - 0x159b)
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:22:10.342 Found 0000:31:00.1 (0x8086 - 0x159b)
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]]
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:22:10.342 Found net devices under 0000:31:00.0: cvl_0_0
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]]
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:22:10.342 Found net devices under 0000:31:00.1: cvl_0_1
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:10.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:10.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms
00:22:10.342
00:22:10.342 --- 10.0.0.2 ping statistics ---
00:22:10.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:10.342 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:10.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:10.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms
00:22:10.342
00:22:10.342 --- 10.0.0.1 ping statistics ---
00:22:10.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:10.342 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=3012965
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 3012965
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:22:10.342 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 3012965 ']'
00:22:10.343 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:10.343 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100
00:22:10.343 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:10.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:10.343 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable
00:22:10.343 14:32:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:22:10.343 [2024-10-07 14:32:33.044571] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:22:10.343 [2024-10-07 14:32:33.044707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:10.343 [2024-10-07 14:32:33.184413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:10.343 [2024-10-07 14:32:33.367897] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:10.343 [2024-10-07 14:32:33.367947] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:10.343 [2024-10-07 14:32:33.367959] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:10.343 [2024-10-07 14:32:33.367971] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:10.343 [2024-10-07 14:32:33.367980] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:10.343 [2024-10-07 14:32:33.370499] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:22:10.343 [2024-10-07 14:32:33.370577] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:22:10.343 [2024-10-07 14:32:33.370697] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:22:10.343 [2024-10-07 14:32:33.370718] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:22:10.343 [2024-10-07 14:32:33.854234] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:22:10.343 Malloc0
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:22:10.343 Malloc1
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.343 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:22:10.343 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.343 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:22:10.343 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.343 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:22:10.343 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.343 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:10.343 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.343 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:22:10.343 [2024-10-07 14:32:34.022334] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:10.343 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.343 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:22:10.343 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.343 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:22:10.343 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.343 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420
00:22:10.604
00:22:10.604 Discovery Log Number of Records 2, Generation counter 2
00:22:10.604 =====Discovery Log Entry 0======
00:22:10.604 trtype: tcp
00:22:10.604 adrfam: ipv4
00:22:10.604 subtype: current discovery subsystem
00:22:10.604 treq: not required
00:22:10.604 portid: 0
00:22:10.604 trsvcid: 4420
00:22:10.604 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:22:10.604 traddr: 10.0.0.2
00:22:10.604 eflags: explicit discovery connections, duplicate discovery information
00:22:10.604 sectype: none
00:22:10.604 =====Discovery Log Entry 1======
00:22:10.604 trtype: tcp
00:22:10.604 adrfam: ipv4
00:22:10.604 subtype: nvme subsystem
00:22:10.604 treq: not required
00:22:10.604 portid: 0
00:22:10.604 trsvcid: 4420
00:22:10.604 subnqn: nqn.2016-06.io.spdk:cnode1
00:22:10.604 traddr: 10.0.0.2
00:22:10.604 eflags: none
00:22:10.604 sectype: none
00:22:10.605 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs))
00:22:10.605 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs
00:22:10.605 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _
00:22:10.605 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _
00:22:10.605 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list
00:22:10.605 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]]
00:22:10.605 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _
00:22:10.605 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]]
00:22:10.605 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _
00:22:10.605 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0
00:22:10.605 14:32:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:22:12.519 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2
00:22:12.519 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0
00:22:12.519 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:22:12.519 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]]
00:22:12.519 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2
00:22:12.519 14:32:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]]
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]]
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1
00:22:14.430 /dev/nvme0n2 ]]
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs))
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]]
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]]
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2
00:22:14.430 14:32:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:22:14.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:22:14.430 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:22:14.430 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0
00:22:14.430 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:22:14.430 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:22:14.430 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:22:14.430 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:22:14.430 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0
00:22:14.430 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection ))
00:22:14.430 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:14.430 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:14.430 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:22:14.430 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:14.430 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:22:14.430 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini
00:22:14.430 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup
00:22:14.430 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync
00:22:14.430 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:14.430 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e
00:22:14.430 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:14.430 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:14.430 rmmod nvme_tcp
00:22:14.430 rmmod nvme_fabrics
00:22:14.430 rmmod nvme_keyring
00:22:14.690 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:14.690 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e
00:22:14.690 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0
00:22:14.690 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 3012965 ']'
00:22:14.690 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 3012965
00:22:14.690 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 3012965 ']'
00:22:14.690 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 3012965
00:22:14.690 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname
00:22:14.690 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:14.690 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3012965
00:22:14.690 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:22:14.690 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:22:14.690 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3012965'
00:22:14.690 killing process with pid 3012965
00:22:14.690 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 3012965
00:22:14.690 14:32:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 3012965
00:22:15.630 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:22:15.630 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:22:15.630 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:22:15.630 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr
00:22:15.630 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save
00:22:15.630 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:22:15.630 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore
00:22:15.630 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:15.630 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:15.630 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:15.630 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:15.630 14:32:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:18.171
00:22:18.171 real 0m16.059s
00:22:18.171 user 0m24.980s
00:22:18.171 sys 0m6.336s
00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable
00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:22:18.171 ************************************
00:22:18.171 END TEST nvmf_nvme_cli
00:22:18.171 ************************************
00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]]
00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp
00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:22:18.171 ************************************
00:22:18.171 START TEST
nvmf_auth_target 00:22:18.171 ************************************ 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:22:18.171 * Looking for test storage... 00:22:18.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:22:18.171 
14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:22:18.171 
14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:18.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.171 --rc genhtml_branch_coverage=1 00:22:18.171 --rc genhtml_function_coverage=1 00:22:18.171 --rc genhtml_legend=1 00:22:18.171 --rc geninfo_all_blocks=1 00:22:18.171 --rc geninfo_unexecuted_blocks=1 00:22:18.171 00:22:18.171 ' 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:18.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.171 --rc genhtml_branch_coverage=1 00:22:18.171 --rc genhtml_function_coverage=1 00:22:18.171 --rc genhtml_legend=1 00:22:18.171 --rc geninfo_all_blocks=1 00:22:18.171 --rc geninfo_unexecuted_blocks=1 00:22:18.171 00:22:18.171 ' 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:18.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.171 --rc genhtml_branch_coverage=1 00:22:18.171 --rc genhtml_function_coverage=1 00:22:18.171 --rc genhtml_legend=1 00:22:18.171 --rc geninfo_all_blocks=1 00:22:18.171 --rc geninfo_unexecuted_blocks=1 00:22:18.171 00:22:18.171 ' 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:18.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.171 --rc genhtml_branch_coverage=1 00:22:18.171 --rc genhtml_function_coverage=1 00:22:18.171 --rc genhtml_legend=1 00:22:18.171 --rc geninfo_all_blocks=1 00:22:18.171 --rc geninfo_unexecuted_blocks=1 00:22:18.171 00:22:18.171 ' 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.171 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:18.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:18.172 14:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:18.172 14:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:22:18.172 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:22:26.309 14:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:26.309 14:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:26.309 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:26.309 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.309 
14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:26.309 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:26.310 Found net devices under 0000:31:00.0: cvl_0_0 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:26.310 
14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:26.310 Found net devices under 0000:31:00.1: cvl_0_1 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:26.310 14:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:26.310 14:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:26.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:22:26.310 00:22:26.310 --- 10.0.0.2 ping statistics --- 00:22:26.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.310 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:26.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:26.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:22:26.310 00:22:26.310 --- 10.0.0.1 ping statistics --- 00:22:26.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.310 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=3018548 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 3018548 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3018548 ']' 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:26.310 14:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3018578 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@752 -- # digest=null 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=2cbe4eaecbf1a2191e931f437e55214684d1b6f4849c15f4 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.Ryi 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 2cbe4eaecbf1a2191e931f437e55214684d1b6f4849c15f4 0 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 2cbe4eaecbf1a2191e931f437e55214684d1b6f4849c15f4 0 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=2cbe4eaecbf1a2191e931f437e55214684d1b6f4849c15f4 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.Ryi 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.Ryi 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Ryi 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=12bfcd0f1998fe73a4c297562c510a4b2c550e75d7b74ae44e987500941cd502 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.0og 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 12bfcd0f1998fe73a4c297562c510a4b2c550e75d7b74ae44e987500941cd502 3 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 12bfcd0f1998fe73a4c297562c510a4b2c550e75d7b74ae44e987500941cd502 3 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=12bfcd0f1998fe73a4c297562c510a4b2c550e75d7b74ae44e987500941cd502 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@730 -- # digest=3 00:22:26.571 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.0og 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.0og 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.0og 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=22757035fee0e795b5b72b1a6ca48fff 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.6Jd 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 22757035fee0e795b5b72b1a6ca48fff 1 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 
22757035fee0e795b5b72b1a6ca48fff 1 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=22757035fee0e795b5b72b1a6ca48fff 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.6Jd 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.6Jd 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.6Jd 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=5ac3a536a946bf1540c6a11ab14c877e5e61b7147239bf2b 00:22:26.832 14:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.kmx 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 5ac3a536a946bf1540c6a11ab14c877e5e61b7147239bf2b 2 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 5ac3a536a946bf1540c6a11ab14c877e5e61b7147239bf2b 2 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=5ac3a536a946bf1540c6a11ab14c877e5e61b7147239bf2b 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.kmx 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.kmx 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.kmx 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A 
digests 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=14bdf0b93a312f21d78ec89773059ce2ad6853c4a528a763 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:22:26.832 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.s6i 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 14bdf0b93a312f21d78ec89773059ce2ad6853c4a528a763 2 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 14bdf0b93a312f21d78ec89773059ce2ad6853c4a528a763 2 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=14bdf0b93a312f21d78ec89773059ce2ad6853c4a528a763 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.s6i 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.s6i 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.s6i 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=6cf49aae78954cfe66d66812f6213c9e 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.OKB 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 6cf49aae78954cfe66d66812f6213c9e 1 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 6cf49aae78954cfe66d66812f6213c9e 1 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=6cf49aae78954cfe66d66812f6213c9e 00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 
00:22:26.833 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.OKB 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.OKB 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.OKB 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=f608c25210c41f640aa2d4726024f2728062dd9581168f97b9f8889cb99bac09 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.y9T 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key f608c25210c41f640aa2d4726024f2728062dd9581168f97b9f8889cb99bac09 3 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # 
format_key DHHC-1 f608c25210c41f640aa2d4726024f2728062dd9581168f97b9f8889cb99bac09 3 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=f608c25210c41f640aa2d4726024f2728062dd9581168f97b9f8889cb99bac09 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.y9T 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.y9T 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.y9T 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3018548 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3018548 ']' 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
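The `gen_dhchap_key` traces above build each secret by reading hex from `/dev/urandom` (`xxd -p -c0`) and piping it through an inline `python -` formatter. Below is a minimal sketch of what that formatter appears to produce, inferred from the trace: decoding the middle field of the `DHHC-1:00:MmNiZTRl...:` secret that shows up later recovers the ASCII hex key, with four extra trailing bytes that are assumed here to be a little-endian CRC-32 of the key. The function name, the CRC detail, and the two-digit digest-id mapping (`0`=null ... `3`=sha512, matching the `digests` array in the trace) are illustrative assumptions, not SPDK's actual implementation.

```python
import base64
import zlib

def format_dhchap_key(hex_key: str, digest_id: int) -> str:
    """Sketch of the DHHC-1 secret representation seen in the trace:
    'DHHC-1:<2-digit digest id>:<base64(ASCII hex key + 4 CRC bytes)>:'.
    The CRC-32 (little-endian) suffix is an assumption."""
    data = hex_key.encode("ascii")
    # Append CRC-32 of the ASCII key bytes, little-endian, before base64-encoding
    crc = zlib.crc32(data).to_bytes(4, "little")
    return "DHHC-1:%02d:%s:" % (digest_id, base64.b64encode(data + crc).decode())

# Key from the trace above; digest 0 corresponds to 'null'
secret = format_dhchap_key("2cbe4eaecbf1a2191e931f437e55214684d1b6f4849c15f4", 0)
```

The base64 payload of the sketch's output begins with the same characters as the secret later passed to `nvme connect` in this trace (`MmNi...`, i.e. the ASCII hex key), which is what motivates the shape of the sketch; only the four-byte suffix rests on the CRC assumption.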
00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:27.094 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.354 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:27.354 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:27.354 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3018578 /var/tmp/host.sock 00:22:27.354 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3018578 ']' 00:22:27.354 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:22:27.355 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:27.355 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:22:27.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:22:27.355 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:27.355 14:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.614 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:27.614 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:27.614 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:22:27.614 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.614 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.614 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.614 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:22:27.614 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Ryi 00:22:27.614 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.614 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.614 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.614 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Ryi 00:22:27.614 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Ryi 00:22:27.874 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.0og ]] 00:22:27.874 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0og 00:22:27.874 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.874 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.874 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.874 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0og 00:22:27.874 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0og 00:22:28.134 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:22:28.134 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.6Jd 00:22:28.134 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.134 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.134 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.134 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.6Jd 00:22:28.134 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.6Jd 00:22:28.134 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.kmx ]] 00:22:28.134 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kmx 00:22:28.134 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.134 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.134 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.134 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kmx 00:22:28.134 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kmx 00:22:28.394 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:22:28.394 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.s6i 00:22:28.394 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.394 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.394 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.394 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.s6i 00:22:28.394 14:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.s6i 00:22:28.394 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.OKB ]] 00:22:28.394 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.OKB 00:22:28.654 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.654 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.654 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.654 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.OKB 00:22:28.654 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.OKB 00:22:28.654 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:22:28.654 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.y9T 00:22:28.654 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.654 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.654 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.654 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.y9T 00:22:28.655 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.y9T 00:22:28.915 14:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:22:28.915 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:28.915 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:28.915 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:28.915 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:28.915 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:29.176 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:22:29.176 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:29.176 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:29.176 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:29.176 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:29.176 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.176 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.176 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.176 14:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.176 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.176 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.176 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.176 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:29.176 00:22:29.176 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:29.176 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:29.176 14:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.437 14:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.437 14:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.437 14:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.437 14:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:29.437 14:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.437 14:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:29.437 { 00:22:29.437 "cntlid": 1, 00:22:29.437 "qid": 0, 00:22:29.437 "state": "enabled", 00:22:29.437 "thread": "nvmf_tgt_poll_group_000", 00:22:29.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:29.437 "listen_address": { 00:22:29.437 "trtype": "TCP", 00:22:29.437 "adrfam": "IPv4", 00:22:29.437 "traddr": "10.0.0.2", 00:22:29.437 "trsvcid": "4420" 00:22:29.437 }, 00:22:29.437 "peer_address": { 00:22:29.437 "trtype": "TCP", 00:22:29.437 "adrfam": "IPv4", 00:22:29.437 "traddr": "10.0.0.1", 00:22:29.437 "trsvcid": "46380" 00:22:29.437 }, 00:22:29.437 "auth": { 00:22:29.437 "state": "completed", 00:22:29.437 "digest": "sha256", 00:22:29.437 "dhgroup": "null" 00:22:29.437 } 00:22:29.437 } 00:22:29.437 ]' 00:22:29.437 14:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:29.437 14:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:29.437 14:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:29.697 14:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:29.697 14:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:29.697 14:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.697 14:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.697 14:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.698 14:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:22:29.698 14:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:22:30.640 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.641 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:30.641 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.641 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.641 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.641 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:30.641 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:22:30.641 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:30.641 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:22:30.641 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:30.641 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:30.641 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:30.641 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:30.641 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.641 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.641 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.641 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.641 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.641 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.641 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.641 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.902 00:22:30.902 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:30.902 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:30.902 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.162 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.162 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.162 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.162 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.163 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.163 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:31.163 { 00:22:31.163 "cntlid": 3, 00:22:31.163 "qid": 0, 00:22:31.163 "state": "enabled", 00:22:31.163 "thread": "nvmf_tgt_poll_group_000", 00:22:31.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:31.163 "listen_address": { 00:22:31.163 "trtype": "TCP", 00:22:31.163 "adrfam": "IPv4", 00:22:31.163 
"traddr": "10.0.0.2", 00:22:31.163 "trsvcid": "4420" 00:22:31.163 }, 00:22:31.163 "peer_address": { 00:22:31.163 "trtype": "TCP", 00:22:31.163 "adrfam": "IPv4", 00:22:31.163 "traddr": "10.0.0.1", 00:22:31.163 "trsvcid": "46420" 00:22:31.163 }, 00:22:31.163 "auth": { 00:22:31.163 "state": "completed", 00:22:31.163 "digest": "sha256", 00:22:31.163 "dhgroup": "null" 00:22:31.163 } 00:22:31.163 } 00:22:31.163 ]' 00:22:31.163 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:31.163 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:31.163 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:31.163 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:31.163 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:31.163 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.163 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.163 14:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.422 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:22:31.423 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:22:32.363 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.363 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:32.363 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.363 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.363 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.363 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:32.363 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:32.363 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:32.363 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:22:32.363 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:32.363 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:32.363 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:22:32.363 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:32.363 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.363 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.363 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.363 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.363 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.363 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.363 14:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.363 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.624 00:22:32.624 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:32.624 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:32.624 
14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.885 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.885 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.885 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.885 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.885 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.885 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:32.885 { 00:22:32.885 "cntlid": 5, 00:22:32.885 "qid": 0, 00:22:32.885 "state": "enabled", 00:22:32.885 "thread": "nvmf_tgt_poll_group_000", 00:22:32.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:32.885 "listen_address": { 00:22:32.885 "trtype": "TCP", 00:22:32.885 "adrfam": "IPv4", 00:22:32.885 "traddr": "10.0.0.2", 00:22:32.885 "trsvcid": "4420" 00:22:32.885 }, 00:22:32.885 "peer_address": { 00:22:32.885 "trtype": "TCP", 00:22:32.885 "adrfam": "IPv4", 00:22:32.885 "traddr": "10.0.0.1", 00:22:32.885 "trsvcid": "33274" 00:22:32.885 }, 00:22:32.885 "auth": { 00:22:32.885 "state": "completed", 00:22:32.885 "digest": "sha256", 00:22:32.885 "dhgroup": "null" 00:22:32.885 } 00:22:32.885 } 00:22:32.885 ]' 00:22:32.885 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:32.885 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:32.885 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:22:32.885 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:32.885 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:32.885 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.885 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.885 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.146 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:22:33.146 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:22:33.718 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.978 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:33.978 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.978 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.978 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.978 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:33.978 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:33.978 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:33.978 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:22:33.978 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:33.978 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:33.978 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:33.978 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:33.978 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.978 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:33.978 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.978 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:33.979 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.979 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:33.979 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:33.979 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:34.239 00:22:34.239 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:34.239 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:34.239 14:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.500 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.500 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.500 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.500 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.500 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.500 
14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:34.500 { 00:22:34.500 "cntlid": 7, 00:22:34.501 "qid": 0, 00:22:34.501 "state": "enabled", 00:22:34.501 "thread": "nvmf_tgt_poll_group_000", 00:22:34.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:34.501 "listen_address": { 00:22:34.501 "trtype": "TCP", 00:22:34.501 "adrfam": "IPv4", 00:22:34.501 "traddr": "10.0.0.2", 00:22:34.501 "trsvcid": "4420" 00:22:34.501 }, 00:22:34.501 "peer_address": { 00:22:34.501 "trtype": "TCP", 00:22:34.501 "adrfam": "IPv4", 00:22:34.501 "traddr": "10.0.0.1", 00:22:34.501 "trsvcid": "33290" 00:22:34.501 }, 00:22:34.501 "auth": { 00:22:34.501 "state": "completed", 00:22:34.501 "digest": "sha256", 00:22:34.501 "dhgroup": "null" 00:22:34.501 } 00:22:34.501 } 00:22:34.501 ]' 00:22:34.501 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:34.501 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:34.501 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:34.501 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:34.501 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:34.501 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.501 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.501 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.761 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:22:34.761 14:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:22:35.702 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.702 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:35.702 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.702 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.703 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.703 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:35.703 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:35.703 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:35.703 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:22:35.703 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:22:35.703 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:35.703 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:35.703 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:35.703 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:35.703 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.703 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.703 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.703 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.703 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.703 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.703 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.703 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.964 00:22:35.964 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:35.964 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:35.964 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.224 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.225 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.225 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.225 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.225 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.225 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:36.225 { 00:22:36.225 "cntlid": 9, 00:22:36.225 "qid": 0, 00:22:36.225 "state": "enabled", 00:22:36.225 "thread": "nvmf_tgt_poll_group_000", 00:22:36.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:36.225 "listen_address": { 00:22:36.225 "trtype": "TCP", 00:22:36.225 "adrfam": "IPv4", 00:22:36.225 "traddr": "10.0.0.2", 00:22:36.225 "trsvcid": "4420" 00:22:36.225 }, 00:22:36.225 "peer_address": { 00:22:36.225 "trtype": "TCP", 00:22:36.225 "adrfam": "IPv4", 00:22:36.225 "traddr": "10.0.0.1", 00:22:36.225 "trsvcid": "33312" 00:22:36.225 
}, 00:22:36.225 "auth": { 00:22:36.225 "state": "completed", 00:22:36.225 "digest": "sha256", 00:22:36.225 "dhgroup": "ffdhe2048" 00:22:36.225 } 00:22:36.225 } 00:22:36.225 ]' 00:22:36.225 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:36.225 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:36.225 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:36.225 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:36.225 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:36.225 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.225 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.225 14:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.485 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:22:36.485 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret 
DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:22:37.428 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.428 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:37.428 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.428 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.428 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.428 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:37.428 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:37.428 14:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:37.428 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:22:37.428 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:37.428 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:37.428 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:37.428 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:22:37.428 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:37.428 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.428 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.428 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.428 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.428 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.428 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.428 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.689 00:22:37.689 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:37.689 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:37.689 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.951 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.951 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.951 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.951 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.951 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.951 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:37.951 { 00:22:37.951 "cntlid": 11, 00:22:37.951 "qid": 0, 00:22:37.951 "state": "enabled", 00:22:37.951 "thread": "nvmf_tgt_poll_group_000", 00:22:37.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:37.951 "listen_address": { 00:22:37.951 "trtype": "TCP", 00:22:37.951 "adrfam": "IPv4", 00:22:37.951 "traddr": "10.0.0.2", 00:22:37.951 "trsvcid": "4420" 00:22:37.951 }, 00:22:37.951 "peer_address": { 00:22:37.951 "trtype": "TCP", 00:22:37.951 "adrfam": "IPv4", 00:22:37.951 "traddr": "10.0.0.1", 00:22:37.951 "trsvcid": "33344" 00:22:37.951 }, 00:22:37.951 "auth": { 00:22:37.951 "state": "completed", 00:22:37.951 "digest": "sha256", 00:22:37.951 "dhgroup": "ffdhe2048" 00:22:37.951 } 00:22:37.951 } 00:22:37.951 ]' 00:22:37.951 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:37.951 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:37.951 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:37.951 14:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:37.951 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:38.211 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.211 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.211 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.212 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:22:38.212 14:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:22:39.155 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.155 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:39.155 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:39.155 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.155 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.155 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:39.155 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:39.155 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:39.155 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:22:39.155 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:39.155 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:39.155 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:39.155 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:39.155 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.155 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.155 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.155 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:22:39.155 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.155 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.155 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.155 14:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.416 00:22:39.416 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:39.416 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:39.416 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.678 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.678 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.678 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.678 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.678 14:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.678 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:39.678 { 00:22:39.678 "cntlid": 13, 00:22:39.678 "qid": 0, 00:22:39.678 "state": "enabled", 00:22:39.678 "thread": "nvmf_tgt_poll_group_000", 00:22:39.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:39.678 "listen_address": { 00:22:39.678 "trtype": "TCP", 00:22:39.678 "adrfam": "IPv4", 00:22:39.678 "traddr": "10.0.0.2", 00:22:39.678 "trsvcid": "4420" 00:22:39.678 }, 00:22:39.678 "peer_address": { 00:22:39.678 "trtype": "TCP", 00:22:39.678 "adrfam": "IPv4", 00:22:39.678 "traddr": "10.0.0.1", 00:22:39.678 "trsvcid": "33374" 00:22:39.678 }, 00:22:39.678 "auth": { 00:22:39.678 "state": "completed", 00:22:39.678 "digest": "sha256", 00:22:39.678 "dhgroup": "ffdhe2048" 00:22:39.678 } 00:22:39.678 } 00:22:39.678 ]' 00:22:39.678 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:39.678 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:39.678 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:39.678 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:39.678 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:39.678 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.678 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.678 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.939 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:22:39.939 14:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:22:40.881 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.881 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:40.881 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.881 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.881 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.881 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:40.881 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:40.881 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:40.881 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:22:40.881 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:40.881 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:40.881 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:40.881 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:40.881 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.881 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:40.881 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.881 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.881 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.881 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:40.881 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:40.881 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:41.142 00:22:41.142 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:41.142 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:41.142 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.404 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.404 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.404 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.404 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.404 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.404 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:41.404 { 00:22:41.404 "cntlid": 15, 00:22:41.404 "qid": 0, 00:22:41.404 "state": "enabled", 00:22:41.404 "thread": "nvmf_tgt_poll_group_000", 00:22:41.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:41.404 "listen_address": { 00:22:41.404 "trtype": "TCP", 00:22:41.404 "adrfam": "IPv4", 00:22:41.404 "traddr": "10.0.0.2", 00:22:41.404 "trsvcid": "4420" 00:22:41.404 }, 00:22:41.404 "peer_address": { 00:22:41.404 "trtype": "TCP", 00:22:41.404 "adrfam": "IPv4", 00:22:41.404 "traddr": "10.0.0.1", 
00:22:41.404 "trsvcid": "33400" 00:22:41.404 }, 00:22:41.404 "auth": { 00:22:41.404 "state": "completed", 00:22:41.404 "digest": "sha256", 00:22:41.404 "dhgroup": "ffdhe2048" 00:22:41.404 } 00:22:41.404 } 00:22:41.404 ]' 00:22:41.404 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:41.404 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:41.404 14:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:41.404 14:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:41.404 14:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:41.404 14:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.404 14:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.404 14:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.664 14:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:22:41.664 14:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:22:42.607 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.607 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:42.607 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.607 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.607 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.607 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:42.607 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:42.607 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:42.607 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:42.607 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:22:42.607 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:42.607 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:42.607 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:42.607 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:42.607 14:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.607 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.607 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.607 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.607 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.607 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.607 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.607 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:42.867 00:22:42.868 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:42.868 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:42.868 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.129 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.129 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.129 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.129 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.129 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.129 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:43.129 { 00:22:43.129 "cntlid": 17, 00:22:43.129 "qid": 0, 00:22:43.129 "state": "enabled", 00:22:43.129 "thread": "nvmf_tgt_poll_group_000", 00:22:43.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:43.129 "listen_address": { 00:22:43.129 "trtype": "TCP", 00:22:43.129 "adrfam": "IPv4", 00:22:43.129 "traddr": "10.0.0.2", 00:22:43.129 "trsvcid": "4420" 00:22:43.129 }, 00:22:43.129 "peer_address": { 00:22:43.129 "trtype": "TCP", 00:22:43.129 "adrfam": "IPv4", 00:22:43.129 "traddr": "10.0.0.1", 00:22:43.129 "trsvcid": "40404" 00:22:43.129 }, 00:22:43.129 "auth": { 00:22:43.129 "state": "completed", 00:22:43.129 "digest": "sha256", 00:22:43.129 "dhgroup": "ffdhe3072" 00:22:43.129 } 00:22:43.129 } 00:22:43.129 ]' 00:22:43.129 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:43.129 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:43.129 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:43.129 14:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:43.129 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:43.129 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.129 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.129 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.390 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:22:43.390 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:22:44.332 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.332 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:44.332 14:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.332 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.332 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.332 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:44.332 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:44.332 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:44.332 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:22:44.332 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:44.332 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:44.332 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:44.332 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:44.332 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.332 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:44.332 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.332 14:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.332 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.332 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:44.332 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:44.332 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:44.593 00:22:44.593 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:44.593 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:44.593 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.853 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.853 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:44.853 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.853 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:44.853 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.853 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:44.853 { 00:22:44.853 "cntlid": 19, 00:22:44.853 "qid": 0, 00:22:44.853 "state": "enabled", 00:22:44.853 "thread": "nvmf_tgt_poll_group_000", 00:22:44.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:44.853 "listen_address": { 00:22:44.853 "trtype": "TCP", 00:22:44.853 "adrfam": "IPv4", 00:22:44.853 "traddr": "10.0.0.2", 00:22:44.853 "trsvcid": "4420" 00:22:44.853 }, 00:22:44.853 "peer_address": { 00:22:44.853 "trtype": "TCP", 00:22:44.853 "adrfam": "IPv4", 00:22:44.853 "traddr": "10.0.0.1", 00:22:44.853 "trsvcid": "40448" 00:22:44.853 }, 00:22:44.853 "auth": { 00:22:44.853 "state": "completed", 00:22:44.853 "digest": "sha256", 00:22:44.853 "dhgroup": "ffdhe3072" 00:22:44.853 } 00:22:44.853 } 00:22:44.853 ]' 00:22:44.853 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:44.853 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:44.853 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:44.853 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:44.853 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:44.853 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.853 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.853 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.113 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:22:45.113 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:22:46.057 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.057 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:46.057 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.057 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.057 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.057 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:46.057 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:46.057 14:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:46.057 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:22:46.057 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:46.057 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:46.057 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:46.057 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:46.057 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.057 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:46.057 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.057 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.057 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.057 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:46.057 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:46.057 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:46.317 00:22:46.317 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:46.317 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:46.317 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.579 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.579 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.579 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.579 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.579 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.579 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:46.579 { 00:22:46.579 "cntlid": 21, 00:22:46.579 "qid": 0, 00:22:46.579 "state": "enabled", 00:22:46.579 "thread": "nvmf_tgt_poll_group_000", 00:22:46.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:46.579 "listen_address": { 00:22:46.579 "trtype": "TCP", 00:22:46.579 "adrfam": "IPv4", 00:22:46.579 "traddr": "10.0.0.2", 00:22:46.579 
"trsvcid": "4420" 00:22:46.579 }, 00:22:46.579 "peer_address": { 00:22:46.579 "trtype": "TCP", 00:22:46.579 "adrfam": "IPv4", 00:22:46.579 "traddr": "10.0.0.1", 00:22:46.579 "trsvcid": "40468" 00:22:46.579 }, 00:22:46.579 "auth": { 00:22:46.579 "state": "completed", 00:22:46.579 "digest": "sha256", 00:22:46.579 "dhgroup": "ffdhe3072" 00:22:46.579 } 00:22:46.579 } 00:22:46.579 ]' 00:22:46.579 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:46.579 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:46.579 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:46.579 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:46.579 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:46.579 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.579 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.579 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.841 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:22:46.841 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:22:47.783 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.784 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:47.784 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.784 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.784 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.784 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:47.784 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:47.784 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:47.784 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:22:47.784 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:47.784 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:47.784 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:47.784 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:47.784 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.784 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:47.784 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.784 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.784 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.784 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:47.784 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:47.784 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:48.045 00:22:48.045 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:48.045 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:48.045 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.306 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.306 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.306 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.306 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.306 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.306 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:48.306 { 00:22:48.306 "cntlid": 23, 00:22:48.306 "qid": 0, 00:22:48.306 "state": "enabled", 00:22:48.306 "thread": "nvmf_tgt_poll_group_000", 00:22:48.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:48.306 "listen_address": { 00:22:48.306 "trtype": "TCP", 00:22:48.306 "adrfam": "IPv4", 00:22:48.306 "traddr": "10.0.0.2", 00:22:48.306 "trsvcid": "4420" 00:22:48.306 }, 00:22:48.306 "peer_address": { 00:22:48.306 "trtype": "TCP", 00:22:48.306 "adrfam": "IPv4", 00:22:48.306 "traddr": "10.0.0.1", 00:22:48.306 "trsvcid": "40506" 00:22:48.306 }, 00:22:48.306 "auth": { 00:22:48.306 "state": "completed", 00:22:48.306 "digest": "sha256", 00:22:48.306 "dhgroup": "ffdhe3072" 00:22:48.306 } 00:22:48.306 } 00:22:48.306 ]' 00:22:48.306 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:48.306 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:48.306 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:48.306 14:33:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:48.306 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:48.306 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.306 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.306 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.567 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:22:48.567 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:22:49.510 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.510 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:49.510 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.510 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:49.510 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.510 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:49.510 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:49.510 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:49.510 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:49.510 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:22:49.510 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:49.510 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:49.510 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:49.510 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:49.510 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.510 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:49.510 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.510 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:22:49.510 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.510 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:49.510 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:49.510 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:49.770 00:22:49.770 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:49.770 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:49.770 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.031 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.031 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.031 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.031 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.031 14:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.031 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:50.031 { 00:22:50.031 "cntlid": 25, 00:22:50.031 "qid": 0, 00:22:50.031 "state": "enabled", 00:22:50.031 "thread": "nvmf_tgt_poll_group_000", 00:22:50.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:50.031 "listen_address": { 00:22:50.031 "trtype": "TCP", 00:22:50.031 "adrfam": "IPv4", 00:22:50.031 "traddr": "10.0.0.2", 00:22:50.031 "trsvcid": "4420" 00:22:50.031 }, 00:22:50.031 "peer_address": { 00:22:50.031 "trtype": "TCP", 00:22:50.031 "adrfam": "IPv4", 00:22:50.031 "traddr": "10.0.0.1", 00:22:50.031 "trsvcid": "40520" 00:22:50.031 }, 00:22:50.031 "auth": { 00:22:50.031 "state": "completed", 00:22:50.031 "digest": "sha256", 00:22:50.031 "dhgroup": "ffdhe4096" 00:22:50.031 } 00:22:50.031 } 00:22:50.031 ]' 00:22:50.031 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:50.031 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:50.031 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:50.031 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:50.031 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:50.031 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.031 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.031 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.293 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:22:50.293 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:22:51.233 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:51.233 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:51.233 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.233 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.233 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.233 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:51.233 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:51.233 14:33:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:51.233 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:22:51.233 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:51.233 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:51.233 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:51.233 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:51.233 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:51.233 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:51.233 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.233 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.233 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.233 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:51.233 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:51.233 14:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:51.493 00:22:51.493 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:51.493 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:51.493 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.752 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.752 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.752 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.752 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.752 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.752 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:51.752 { 00:22:51.752 "cntlid": 27, 00:22:51.752 "qid": 0, 00:22:51.752 "state": "enabled", 00:22:51.752 "thread": "nvmf_tgt_poll_group_000", 00:22:51.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:51.752 "listen_address": { 00:22:51.752 "trtype": "TCP", 00:22:51.752 "adrfam": "IPv4", 00:22:51.752 "traddr": "10.0.0.2", 00:22:51.752 
"trsvcid": "4420" 00:22:51.752 }, 00:22:51.752 "peer_address": { 00:22:51.752 "trtype": "TCP", 00:22:51.752 "adrfam": "IPv4", 00:22:51.752 "traddr": "10.0.0.1", 00:22:51.752 "trsvcid": "58070" 00:22:51.752 }, 00:22:51.752 "auth": { 00:22:51.752 "state": "completed", 00:22:51.752 "digest": "sha256", 00:22:51.752 "dhgroup": "ffdhe4096" 00:22:51.752 } 00:22:51.752 } 00:22:51.752 ]' 00:22:51.752 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:51.752 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:51.752 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:51.752 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:51.752 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:51.752 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.752 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.752 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.012 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:22:52.012 14:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:22:52.953 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.953 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:52.953 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.953 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.953 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.953 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:52.953 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:52.953 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:52.953 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:22:52.953 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:52.953 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:52.953 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:52.953 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:52.953 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.953 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:52.953 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.953 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.953 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.953 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:52.953 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:52.953 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:53.213 00:22:53.213 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:53.213 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:22:53.213 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.473 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.473 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.473 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.473 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.473 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.473 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:53.473 { 00:22:53.473 "cntlid": 29, 00:22:53.473 "qid": 0, 00:22:53.473 "state": "enabled", 00:22:53.473 "thread": "nvmf_tgt_poll_group_000", 00:22:53.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:53.473 "listen_address": { 00:22:53.473 "trtype": "TCP", 00:22:53.473 "adrfam": "IPv4", 00:22:53.473 "traddr": "10.0.0.2", 00:22:53.473 "trsvcid": "4420" 00:22:53.473 }, 00:22:53.473 "peer_address": { 00:22:53.473 "trtype": "TCP", 00:22:53.473 "adrfam": "IPv4", 00:22:53.473 "traddr": "10.0.0.1", 00:22:53.473 "trsvcid": "58102" 00:22:53.473 }, 00:22:53.473 "auth": { 00:22:53.473 "state": "completed", 00:22:53.473 "digest": "sha256", 00:22:53.473 "dhgroup": "ffdhe4096" 00:22:53.473 } 00:22:53.473 } 00:22:53.473 ]' 00:22:53.473 14:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:53.473 14:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:53.473 14:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:53.473 14:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:53.473 14:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:53.473 14:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.473 14:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.473 14:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.733 14:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:22:53.733 14:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:22:54.672 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.672 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:54.672 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.672 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.672 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.672 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:54.672 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:54.672 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:54.672 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:22:54.672 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:54.672 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:54.672 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:54.672 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:54.672 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.672 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:22:54.672 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.672 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.672 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.672 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:54.672 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:54.672 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:54.932 00:22:54.932 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:54.932 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:54.932 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.191 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.192 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.192 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.192 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:55.192 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.192 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:55.192 { 00:22:55.192 "cntlid": 31, 00:22:55.192 "qid": 0, 00:22:55.192 "state": "enabled", 00:22:55.192 "thread": "nvmf_tgt_poll_group_000", 00:22:55.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:55.192 "listen_address": { 00:22:55.192 "trtype": "TCP", 00:22:55.192 "adrfam": "IPv4", 00:22:55.192 "traddr": "10.0.0.2", 00:22:55.192 "trsvcid": "4420" 00:22:55.192 }, 00:22:55.192 "peer_address": { 00:22:55.192 "trtype": "TCP", 00:22:55.192 "adrfam": "IPv4", 00:22:55.192 "traddr": "10.0.0.1", 00:22:55.192 "trsvcid": "58126" 00:22:55.192 }, 00:22:55.192 "auth": { 00:22:55.192 "state": "completed", 00:22:55.192 "digest": "sha256", 00:22:55.192 "dhgroup": "ffdhe4096" 00:22:55.192 } 00:22:55.192 } 00:22:55.192 ]' 00:22:55.192 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:55.192 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:55.192 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:55.192 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:55.192 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:55.192 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.192 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.192 14:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.452 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:22:55.452 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:22:56.395 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:56.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:56.395 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:56.395 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.395 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.395 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.395 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:56.395 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:56.395 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:56.395 14:33:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:56.395 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:22:56.395 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:56.395 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:56.395 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:56.395 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:56.395 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:56.395 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:56.395 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.395 14:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.395 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.395 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:56.395 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:56.395 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:56.656 00:22:56.918 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:56.918 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:56.918 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.918 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.918 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.918 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.918 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.918 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.918 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:56.918 { 00:22:56.918 "cntlid": 33, 00:22:56.918 "qid": 0, 00:22:56.918 "state": "enabled", 00:22:56.918 "thread": "nvmf_tgt_poll_group_000", 00:22:56.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:56.918 "listen_address": { 00:22:56.918 "trtype": "TCP", 00:22:56.918 "adrfam": "IPv4", 00:22:56.918 "traddr": "10.0.0.2", 00:22:56.918 
"trsvcid": "4420" 00:22:56.918 }, 00:22:56.918 "peer_address": { 00:22:56.918 "trtype": "TCP", 00:22:56.918 "adrfam": "IPv4", 00:22:56.918 "traddr": "10.0.0.1", 00:22:56.918 "trsvcid": "58158" 00:22:56.918 }, 00:22:56.918 "auth": { 00:22:56.918 "state": "completed", 00:22:56.918 "digest": "sha256", 00:22:56.918 "dhgroup": "ffdhe6144" 00:22:56.918 } 00:22:56.918 } 00:22:56.918 ]' 00:22:56.918 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:56.918 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:57.179 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:57.179 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:57.179 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:57.179 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:57.179 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:57.179 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:57.440 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:22:57.440 14:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:22:58.011 14:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.011 14:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:58.011 14:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.011 14:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.011 14:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.011 14:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:58.011 14:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:58.011 14:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:58.272 14:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:22:58.272 14:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:58.272 14:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:58.272 14:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:58.272 14:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:58.272 14:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:58.272 14:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.272 14:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.272 14:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.272 14:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.272 14:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.272 14:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.272 14:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.532 00:22:58.793 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:58.793 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.793 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:58.793 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.793 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:58.793 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.793 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.793 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.793 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:58.793 { 00:22:58.793 "cntlid": 35, 00:22:58.793 "qid": 0, 00:22:58.793 "state": "enabled", 00:22:58.793 "thread": "nvmf_tgt_poll_group_000", 00:22:58.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:22:58.793 "listen_address": { 00:22:58.793 "trtype": "TCP", 00:22:58.793 "adrfam": "IPv4", 00:22:58.793 "traddr": "10.0.0.2", 00:22:58.793 "trsvcid": "4420" 00:22:58.793 }, 00:22:58.793 "peer_address": { 00:22:58.793 "trtype": "TCP", 00:22:58.793 "adrfam": "IPv4", 00:22:58.793 "traddr": "10.0.0.1", 00:22:58.793 "trsvcid": "58188" 00:22:58.793 }, 00:22:58.793 "auth": { 00:22:58.793 "state": "completed", 00:22:58.793 "digest": "sha256", 00:22:58.793 "dhgroup": "ffdhe6144" 00:22:58.793 } 00:22:58.793 } 00:22:58.793 ]' 00:22:58.793 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:58.793 14:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:58.793 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:59.055 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:59.055 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:59.055 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.055 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.055 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.055 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:22:59.055 14:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:22:59.996 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:59.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:59.996 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:59.996 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.996 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.996 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.996 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:59.996 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:59.996 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:59.996 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:22:59.996 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:59.996 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:59.996 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:59.996 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:59.996 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:59.996 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:22:59.996 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.996 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.257 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.257 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:00.257 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:00.257 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:00.517 00:23:00.517 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:00.517 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:00.517 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.778 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.778 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:00.778 14:33:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.778 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.778 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.778 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:00.778 { 00:23:00.778 "cntlid": 37, 00:23:00.778 "qid": 0, 00:23:00.778 "state": "enabled", 00:23:00.778 "thread": "nvmf_tgt_poll_group_000", 00:23:00.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:00.778 "listen_address": { 00:23:00.779 "trtype": "TCP", 00:23:00.779 "adrfam": "IPv4", 00:23:00.779 "traddr": "10.0.0.2", 00:23:00.779 "trsvcid": "4420" 00:23:00.779 }, 00:23:00.779 "peer_address": { 00:23:00.779 "trtype": "TCP", 00:23:00.779 "adrfam": "IPv4", 00:23:00.779 "traddr": "10.0.0.1", 00:23:00.779 "trsvcid": "58220" 00:23:00.779 }, 00:23:00.779 "auth": { 00:23:00.779 "state": "completed", 00:23:00.779 "digest": "sha256", 00:23:00.779 "dhgroup": "ffdhe6144" 00:23:00.779 } 00:23:00.779 } 00:23:00.779 ]' 00:23:00.779 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:00.779 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:00.779 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:00.779 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:00.779 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:00.779 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:00.779 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:00.779 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.040 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:23:01.040 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:23:01.982 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:01.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:01.982 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:01.982 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.982 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.982 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.982 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:01.982 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:01.982 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:01.982 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:23:01.982 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:01.982 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:01.982 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:01.982 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:01.982 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:01.982 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:01.982 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.982 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.982 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.982 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:01.982 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:01.982 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:02.243 00:23:02.243 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:02.243 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:02.243 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.504 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.504 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.504 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.504 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.504 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.504 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:02.504 { 00:23:02.504 "cntlid": 39, 00:23:02.504 "qid": 0, 00:23:02.504 "state": "enabled", 00:23:02.504 "thread": "nvmf_tgt_poll_group_000", 00:23:02.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:02.504 "listen_address": { 00:23:02.504 "trtype": "TCP", 00:23:02.504 "adrfam": 
"IPv4", 00:23:02.504 "traddr": "10.0.0.2", 00:23:02.504 "trsvcid": "4420" 00:23:02.504 }, 00:23:02.504 "peer_address": { 00:23:02.504 "trtype": "TCP", 00:23:02.504 "adrfam": "IPv4", 00:23:02.504 "traddr": "10.0.0.1", 00:23:02.504 "trsvcid": "47086" 00:23:02.504 }, 00:23:02.504 "auth": { 00:23:02.504 "state": "completed", 00:23:02.504 "digest": "sha256", 00:23:02.504 "dhgroup": "ffdhe6144" 00:23:02.504 } 00:23:02.504 } 00:23:02.504 ]' 00:23:02.504 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:02.504 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:02.504 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:02.504 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:02.504 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:02.765 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.765 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.765 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.765 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:23:02.765 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:23:03.707 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.707 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:03.707 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.707 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.707 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.707 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:03.707 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:03.707 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:03.707 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:03.707 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:23:03.707 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:03.707 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:03.707 
14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:03.707 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:03.707 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:03.707 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:03.707 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.707 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.707 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.707 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:03.707 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:03.707 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:04.304 00:23:04.304 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:04.304 14:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:04.304 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.564 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.564 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.564 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.564 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.564 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.564 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:04.564 { 00:23:04.564 "cntlid": 41, 00:23:04.564 "qid": 0, 00:23:04.564 "state": "enabled", 00:23:04.564 "thread": "nvmf_tgt_poll_group_000", 00:23:04.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:04.564 "listen_address": { 00:23:04.564 "trtype": "TCP", 00:23:04.564 "adrfam": "IPv4", 00:23:04.564 "traddr": "10.0.0.2", 00:23:04.564 "trsvcid": "4420" 00:23:04.564 }, 00:23:04.564 "peer_address": { 00:23:04.564 "trtype": "TCP", 00:23:04.564 "adrfam": "IPv4", 00:23:04.564 "traddr": "10.0.0.1", 00:23:04.564 "trsvcid": "47100" 00:23:04.564 }, 00:23:04.564 "auth": { 00:23:04.564 "state": "completed", 00:23:04.565 "digest": "sha256", 00:23:04.565 "dhgroup": "ffdhe8192" 00:23:04.565 } 00:23:04.565 } 00:23:04.565 ]' 00:23:04.565 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:04.565 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:23:04.565 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:04.565 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:04.565 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:04.565 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.565 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.565 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.828 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:23:04.828 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:23:05.775 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.775 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:05.775 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.775 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.775 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.775 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:05.775 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:05.775 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:05.775 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:23:05.775 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:05.775 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:05.775 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:05.775 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:05.775 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.775 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:23:05.775 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.775 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.775 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.775 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.775 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.775 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:06.346 00:23:06.346 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:06.346 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:06.346 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.346 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.347 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.347 14:33:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.347 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.347 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.347 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:06.347 { 00:23:06.347 "cntlid": 43, 00:23:06.347 "qid": 0, 00:23:06.347 "state": "enabled", 00:23:06.347 "thread": "nvmf_tgt_poll_group_000", 00:23:06.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:06.347 "listen_address": { 00:23:06.347 "trtype": "TCP", 00:23:06.347 "adrfam": "IPv4", 00:23:06.347 "traddr": "10.0.0.2", 00:23:06.347 "trsvcid": "4420" 00:23:06.347 }, 00:23:06.347 "peer_address": { 00:23:06.347 "trtype": "TCP", 00:23:06.347 "adrfam": "IPv4", 00:23:06.347 "traddr": "10.0.0.1", 00:23:06.347 "trsvcid": "47132" 00:23:06.347 }, 00:23:06.347 "auth": { 00:23:06.347 "state": "completed", 00:23:06.347 "digest": "sha256", 00:23:06.347 "dhgroup": "ffdhe8192" 00:23:06.347 } 00:23:06.347 } 00:23:06.347 ]' 00:23:06.608 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:06.608 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:06.608 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:06.608 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:06.608 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:06.608 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:06.608 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:06.608 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.869 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:23:06.869 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:23:07.440 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:07.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:07.700 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:07.700 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.700 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.700 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.700 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:07.700 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:07.700 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:07.700 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:23:07.700 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:07.700 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:23:07.700 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:07.700 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:07.700 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:07.700 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:07.700 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.700 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.700 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.700 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:07.700 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:07.700 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:08.271 00:23:08.271 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:08.271 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.271 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:08.531 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.531 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:08.531 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.531 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.531 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.531 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:08.531 { 00:23:08.531 "cntlid": 45, 00:23:08.531 "qid": 0, 00:23:08.531 "state": "enabled", 00:23:08.531 "thread": "nvmf_tgt_poll_group_000", 00:23:08.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:08.531 
"listen_address": { 00:23:08.531 "trtype": "TCP", 00:23:08.531 "adrfam": "IPv4", 00:23:08.531 "traddr": "10.0.0.2", 00:23:08.531 "trsvcid": "4420" 00:23:08.531 }, 00:23:08.531 "peer_address": { 00:23:08.531 "trtype": "TCP", 00:23:08.531 "adrfam": "IPv4", 00:23:08.531 "traddr": "10.0.0.1", 00:23:08.531 "trsvcid": "47170" 00:23:08.531 }, 00:23:08.531 "auth": { 00:23:08.531 "state": "completed", 00:23:08.531 "digest": "sha256", 00:23:08.531 "dhgroup": "ffdhe8192" 00:23:08.531 } 00:23:08.531 } 00:23:08.531 ]' 00:23:08.531 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:08.531 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:08.531 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:08.531 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:08.531 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:08.531 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:08.531 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:08.531 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:08.792 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:23:08.792 14:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:23:09.734 14:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:09.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:09.734 14:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:09.734 14:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.734 14:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.734 14:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.734 14:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:09.734 14:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:09.734 14:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:09.734 14:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:23:09.734 14:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:09.734 14:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:23:09.734 14:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:09.734 14:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:09.734 14:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:09.734 14:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:09.734 14:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.734 14:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.734 14:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.734 14:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:09.734 14:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:09.734 14:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:10.306 00:23:10.307 14:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:10.307 14:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:23:10.307 14:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:10.567 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.567 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:10.567 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.567 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.567 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.567 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:10.567 { 00:23:10.567 "cntlid": 47, 00:23:10.567 "qid": 0, 00:23:10.567 "state": "enabled", 00:23:10.567 "thread": "nvmf_tgt_poll_group_000", 00:23:10.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:10.567 "listen_address": { 00:23:10.567 "trtype": "TCP", 00:23:10.567 "adrfam": "IPv4", 00:23:10.567 "traddr": "10.0.0.2", 00:23:10.567 "trsvcid": "4420" 00:23:10.567 }, 00:23:10.567 "peer_address": { 00:23:10.567 "trtype": "TCP", 00:23:10.567 "adrfam": "IPv4", 00:23:10.567 "traddr": "10.0.0.1", 00:23:10.567 "trsvcid": "47214" 00:23:10.567 }, 00:23:10.567 "auth": { 00:23:10.567 "state": "completed", 00:23:10.567 "digest": "sha256", 00:23:10.567 "dhgroup": "ffdhe8192" 00:23:10.567 } 00:23:10.567 } 00:23:10.567 ]' 00:23:10.567 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:10.567 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:10.567 14:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:10.567 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:10.567 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:10.567 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:10.567 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:10.567 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.827 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:23:10.827 14:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:23:11.398 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:11.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:11.398 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:11.398 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:23:11.398 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.398 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.398 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:23:11.398 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:11.398 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:11.398 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:11.398 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:11.658 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:23:11.658 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:11.658 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:11.658 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:11.658 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:11.658 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:11.658 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:11.658 
14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.658 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.658 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.658 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:11.658 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:11.658 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:11.919 00:23:11.919 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:11.919 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:11.919 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.179 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.179 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:12.179 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.179 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.179 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.179 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:12.179 { 00:23:12.179 "cntlid": 49, 00:23:12.179 "qid": 0, 00:23:12.179 "state": "enabled", 00:23:12.180 "thread": "nvmf_tgt_poll_group_000", 00:23:12.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:12.180 "listen_address": { 00:23:12.180 "trtype": "TCP", 00:23:12.180 "adrfam": "IPv4", 00:23:12.180 "traddr": "10.0.0.2", 00:23:12.180 "trsvcid": "4420" 00:23:12.180 }, 00:23:12.180 "peer_address": { 00:23:12.180 "trtype": "TCP", 00:23:12.180 "adrfam": "IPv4", 00:23:12.180 "traddr": "10.0.0.1", 00:23:12.180 "trsvcid": "54466" 00:23:12.180 }, 00:23:12.180 "auth": { 00:23:12.180 "state": "completed", 00:23:12.180 "digest": "sha384", 00:23:12.180 "dhgroup": "null" 00:23:12.180 } 00:23:12.180 } 00:23:12.180 ]' 00:23:12.180 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:12.180 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:12.180 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:12.180 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:12.180 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:12.180 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:12.180 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:23:12.180 14:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:12.441 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:23:12.442 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:23:13.383 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:13.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:13.383 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:13.383 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.383 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.383 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.383 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:13.383 14:33:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:13.383 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:13.383 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:23:13.383 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:13.383 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:13.383 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:13.383 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:13.383 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:13.383 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:13.383 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.383 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.383 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.383 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:13.383 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:13.383 14:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:13.643 00:23:13.643 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:13.643 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:13.643 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.903 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.904 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:13.904 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.904 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.904 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.904 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:13.904 { 00:23:13.904 "cntlid": 51, 00:23:13.904 "qid": 0, 00:23:13.904 "state": "enabled", 00:23:13.904 "thread": "nvmf_tgt_poll_group_000", 00:23:13.904 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:13.904 "listen_address": { 00:23:13.904 "trtype": "TCP", 00:23:13.904 "adrfam": "IPv4", 00:23:13.904 "traddr": "10.0.0.2", 00:23:13.904 "trsvcid": "4420" 00:23:13.904 }, 00:23:13.904 "peer_address": { 00:23:13.904 "trtype": "TCP", 00:23:13.904 "adrfam": "IPv4", 00:23:13.904 "traddr": "10.0.0.1", 00:23:13.904 "trsvcid": "54506" 00:23:13.904 }, 00:23:13.904 "auth": { 00:23:13.904 "state": "completed", 00:23:13.904 "digest": "sha384", 00:23:13.904 "dhgroup": "null" 00:23:13.904 } 00:23:13.904 } 00:23:13.904 ]' 00:23:13.904 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:13.904 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:13.904 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:13.904 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:13.904 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:13.904 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:13.904 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:13.904 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.164 14:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:23:14.165 14:33:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:23:15.105 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:15.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:15.105 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:15.105 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.105 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.105 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.105 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:15.105 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:15.105 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:15.105 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:23:15.105 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:23:15.105 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:15.105 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:15.105 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:15.105 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:15.105 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:15.105 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.105 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.105 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.105 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:15.105 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:15.105 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:15.364 00:23:15.364 14:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:15.364 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:15.364 14:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:15.624 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.624 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:15.624 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.624 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.624 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.624 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:15.624 { 00:23:15.624 "cntlid": 53, 00:23:15.624 "qid": 0, 00:23:15.624 "state": "enabled", 00:23:15.624 "thread": "nvmf_tgt_poll_group_000", 00:23:15.624 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:15.624 "listen_address": { 00:23:15.624 "trtype": "TCP", 00:23:15.624 "adrfam": "IPv4", 00:23:15.624 "traddr": "10.0.0.2", 00:23:15.624 "trsvcid": "4420" 00:23:15.624 }, 00:23:15.624 "peer_address": { 00:23:15.624 "trtype": "TCP", 00:23:15.624 "adrfam": "IPv4", 00:23:15.624 "traddr": "10.0.0.1", 00:23:15.624 "trsvcid": "54530" 00:23:15.624 }, 00:23:15.624 "auth": { 00:23:15.624 "state": "completed", 00:23:15.624 "digest": "sha384", 00:23:15.624 "dhgroup": "null" 00:23:15.624 } 00:23:15.624 } 00:23:15.624 ]' 00:23:15.624 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:23:15.624 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:15.624 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:15.624 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:15.624 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:15.624 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:15.624 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.624 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.884 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:23:15.884 14:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:23:16.824 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:16.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:16.824 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:16.824 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.824 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.824 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.824 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:16.824 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:16.824 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:16.824 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:23:16.824 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:16.824 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:16.824 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:16.824 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:16.824 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:16.824 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:16.824 
14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.824 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.824 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.824 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:16.824 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:16.824 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:17.084 00:23:17.084 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:17.084 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:17.084 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.344 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.344 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:17.344 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.344 14:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.344 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.344 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:17.344 { 00:23:17.344 "cntlid": 55, 00:23:17.344 "qid": 0, 00:23:17.344 "state": "enabled", 00:23:17.344 "thread": "nvmf_tgt_poll_group_000", 00:23:17.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:17.344 "listen_address": { 00:23:17.344 "trtype": "TCP", 00:23:17.344 "adrfam": "IPv4", 00:23:17.344 "traddr": "10.0.0.2", 00:23:17.344 "trsvcid": "4420" 00:23:17.344 }, 00:23:17.344 "peer_address": { 00:23:17.344 "trtype": "TCP", 00:23:17.344 "adrfam": "IPv4", 00:23:17.344 "traddr": "10.0.0.1", 00:23:17.344 "trsvcid": "54550" 00:23:17.344 }, 00:23:17.344 "auth": { 00:23:17.344 "state": "completed", 00:23:17.344 "digest": "sha384", 00:23:17.344 "dhgroup": "null" 00:23:17.344 } 00:23:17.344 } 00:23:17.344 ]' 00:23:17.344 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:17.344 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:17.344 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:17.344 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:17.344 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:17.344 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:17.344 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.344 14:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:17.603 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:23:17.603 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:23:18.541 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:18.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:18.541 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:18.541 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.541 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.541 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.541 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:18.541 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:18.541 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:18.541 14:33:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:18.541 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:23:18.541 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:18.541 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:18.541 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:18.541 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:18.541 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:18.541 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.541 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.541 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.541 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.541 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.541 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.541 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.801 00:23:18.801 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:18.801 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:18.801 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:19.061 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.061 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:19.061 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.061 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.061 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.061 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:19.061 { 00:23:19.061 "cntlid": 57, 00:23:19.061 "qid": 0, 00:23:19.061 "state": "enabled", 00:23:19.061 "thread": "nvmf_tgt_poll_group_000", 00:23:19.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:19.061 "listen_address": { 00:23:19.061 "trtype": "TCP", 00:23:19.061 "adrfam": "IPv4", 00:23:19.061 "traddr": "10.0.0.2", 00:23:19.061 
"trsvcid": "4420" 00:23:19.061 }, 00:23:19.061 "peer_address": { 00:23:19.061 "trtype": "TCP", 00:23:19.061 "adrfam": "IPv4", 00:23:19.061 "traddr": "10.0.0.1", 00:23:19.061 "trsvcid": "54574" 00:23:19.061 }, 00:23:19.061 "auth": { 00:23:19.061 "state": "completed", 00:23:19.061 "digest": "sha384", 00:23:19.061 "dhgroup": "ffdhe2048" 00:23:19.061 } 00:23:19.061 } 00:23:19.061 ]' 00:23:19.061 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:19.061 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:19.061 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:19.061 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:19.061 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:19.061 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:19.061 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:19.061 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:19.321 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:23:19.322 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:23:20.260 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:20.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:20.261 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:20.261 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.261 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.261 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.261 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:20.261 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:20.261 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:20.261 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:23:20.261 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:20.261 14:33:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:20.261 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:20.261 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:20.261 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:20.261 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.261 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.261 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.261 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.261 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.261 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.261 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.520 00:23:20.520 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:20.520 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:20.520 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:20.780 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.780 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:20.780 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.780 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.780 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.780 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:20.780 { 00:23:20.780 "cntlid": 59, 00:23:20.780 "qid": 0, 00:23:20.780 "state": "enabled", 00:23:20.780 "thread": "nvmf_tgt_poll_group_000", 00:23:20.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:20.780 "listen_address": { 00:23:20.780 "trtype": "TCP", 00:23:20.780 "adrfam": "IPv4", 00:23:20.780 "traddr": "10.0.0.2", 00:23:20.780 "trsvcid": "4420" 00:23:20.780 }, 00:23:20.780 "peer_address": { 00:23:20.780 "trtype": "TCP", 00:23:20.780 "adrfam": "IPv4", 00:23:20.780 "traddr": "10.0.0.1", 00:23:20.780 "trsvcid": "54602" 00:23:20.780 }, 00:23:20.780 "auth": { 00:23:20.780 "state": "completed", 00:23:20.780 "digest": "sha384", 00:23:20.780 "dhgroup": "ffdhe2048" 00:23:20.780 } 00:23:20.780 } 00:23:20.780 ]' 00:23:20.780 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:20.780 14:33:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:20.780 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:20.780 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:20.780 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:20.780 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:20.780 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:20.780 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:21.041 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:23:21.041 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:23:21.613 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:21.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:21.613 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:21.613 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.613 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.873 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.873 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:21.873 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:21.873 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:21.873 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:23:21.873 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:21.873 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:21.873 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:21.873 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:21.873 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:21.873 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:23:21.873 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.873 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.873 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.873 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:21.873 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:21.873 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:22.134 00:23:22.134 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:22.134 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:22.134 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:22.395 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.395 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:22.395 14:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.395 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.395 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.395 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:22.395 { 00:23:22.395 "cntlid": 61, 00:23:22.395 "qid": 0, 00:23:22.395 "state": "enabled", 00:23:22.395 "thread": "nvmf_tgt_poll_group_000", 00:23:22.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:22.395 "listen_address": { 00:23:22.395 "trtype": "TCP", 00:23:22.395 "adrfam": "IPv4", 00:23:22.395 "traddr": "10.0.0.2", 00:23:22.395 "trsvcid": "4420" 00:23:22.395 }, 00:23:22.395 "peer_address": { 00:23:22.395 "trtype": "TCP", 00:23:22.395 "adrfam": "IPv4", 00:23:22.395 "traddr": "10.0.0.1", 00:23:22.395 "trsvcid": "43588" 00:23:22.395 }, 00:23:22.395 "auth": { 00:23:22.395 "state": "completed", 00:23:22.395 "digest": "sha384", 00:23:22.395 "dhgroup": "ffdhe2048" 00:23:22.395 } 00:23:22.395 } 00:23:22.395 ]' 00:23:22.395 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:22.395 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:22.395 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:22.395 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:22.395 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:22.395 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:22.395 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:22.395 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:22.656 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:23:22.656 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:23:23.597 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:23.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:23.598 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:23.598 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.598 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.598 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.598 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:23.598 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:23.598 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:23.598 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:23:23.598 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:23.598 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:23.598 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:23.598 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:23.598 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:23.598 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:23.598 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.598 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.598 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.598 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:23.598 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:23.598 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:23.857 00:23:23.857 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:23.857 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:23.857 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:24.117 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.117 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:24.117 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.117 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.117 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.117 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:24.117 { 00:23:24.117 "cntlid": 63, 00:23:24.117 "qid": 0, 00:23:24.117 "state": "enabled", 00:23:24.117 "thread": "nvmf_tgt_poll_group_000", 00:23:24.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:24.117 "listen_address": { 00:23:24.117 "trtype": "TCP", 00:23:24.117 "adrfam": 
"IPv4", 00:23:24.117 "traddr": "10.0.0.2", 00:23:24.117 "trsvcid": "4420" 00:23:24.117 }, 00:23:24.117 "peer_address": { 00:23:24.117 "trtype": "TCP", 00:23:24.117 "adrfam": "IPv4", 00:23:24.117 "traddr": "10.0.0.1", 00:23:24.117 "trsvcid": "43612" 00:23:24.117 }, 00:23:24.117 "auth": { 00:23:24.117 "state": "completed", 00:23:24.117 "digest": "sha384", 00:23:24.117 "dhgroup": "ffdhe2048" 00:23:24.117 } 00:23:24.117 } 00:23:24.117 ]' 00:23:24.117 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:24.117 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:24.117 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:24.117 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:24.117 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:24.117 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:24.117 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:24.117 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:24.381 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:23:24.381 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:23:25.322 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:25.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:25.322 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:25.322 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.322 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.322 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.322 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:25.322 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:25.322 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:25.322 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:25.322 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:23:25.322 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:25.322 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:25.322 
14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:25.323 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:25.323 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:25.323 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:25.323 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.323 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.323 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.323 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:25.323 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:25.323 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:25.583 00:23:25.583 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:25.583 14:33:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:25.583 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:25.844 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.844 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:25.844 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.844 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.844 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.844 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:25.844 { 00:23:25.844 "cntlid": 65, 00:23:25.844 "qid": 0, 00:23:25.844 "state": "enabled", 00:23:25.844 "thread": "nvmf_tgt_poll_group_000", 00:23:25.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:25.844 "listen_address": { 00:23:25.844 "trtype": "TCP", 00:23:25.844 "adrfam": "IPv4", 00:23:25.844 "traddr": "10.0.0.2", 00:23:25.844 "trsvcid": "4420" 00:23:25.844 }, 00:23:25.844 "peer_address": { 00:23:25.844 "trtype": "TCP", 00:23:25.844 "adrfam": "IPv4", 00:23:25.844 "traddr": "10.0.0.1", 00:23:25.844 "trsvcid": "43642" 00:23:25.844 }, 00:23:25.844 "auth": { 00:23:25.844 "state": "completed", 00:23:25.844 "digest": "sha384", 00:23:25.844 "dhgroup": "ffdhe3072" 00:23:25.844 } 00:23:25.844 } 00:23:25.844 ]' 00:23:25.844 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:25.844 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:23:25.844 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:25.844 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:25.844 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:25.844 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:25.844 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:25.844 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:26.103 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:23:26.104 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:23:27.040 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:27.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:27.040 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:27.040 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.040 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.040 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.040 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:27.040 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:27.040 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:27.040 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:23:27.040 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:27.040 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:27.040 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:27.040 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:27.040 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:27.040 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:23:27.040 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.040 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.040 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.040 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:27.040 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:27.040 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:27.300 00:23:27.300 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:27.300 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:27.300 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:27.560 14:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.560 14:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:27.560 14:33:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.560 14:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.560 14:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.560 14:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:27.560 { 00:23:27.560 "cntlid": 67, 00:23:27.560 "qid": 0, 00:23:27.560 "state": "enabled", 00:23:27.560 "thread": "nvmf_tgt_poll_group_000", 00:23:27.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:27.560 "listen_address": { 00:23:27.560 "trtype": "TCP", 00:23:27.560 "adrfam": "IPv4", 00:23:27.560 "traddr": "10.0.0.2", 00:23:27.560 "trsvcid": "4420" 00:23:27.560 }, 00:23:27.560 "peer_address": { 00:23:27.560 "trtype": "TCP", 00:23:27.560 "adrfam": "IPv4", 00:23:27.560 "traddr": "10.0.0.1", 00:23:27.560 "trsvcid": "43662" 00:23:27.560 }, 00:23:27.560 "auth": { 00:23:27.560 "state": "completed", 00:23:27.560 "digest": "sha384", 00:23:27.560 "dhgroup": "ffdhe3072" 00:23:27.560 } 00:23:27.560 } 00:23:27.560 ]' 00:23:27.560 14:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:27.560 14:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:27.560 14:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:27.560 14:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:27.560 14:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:27.560 14:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:27.560 14:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:27.560 14:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:27.820 14:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:23:27.821 14:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:23:28.762 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:28.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:28.762 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:28.762 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.762 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.762 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.762 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:28.762 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:28.762 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:28.762 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:23:28.762 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:28.762 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:28.762 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:28.762 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:28.762 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:28.762 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:28.762 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.762 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.762 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.762 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:28.762 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:28.762 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:29.023 00:23:29.023 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:29.023 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:29.023 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:29.283 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.283 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:29.283 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.283 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.283 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.283 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:29.283 { 00:23:29.283 "cntlid": 69, 00:23:29.283 "qid": 0, 00:23:29.283 "state": "enabled", 00:23:29.283 "thread": "nvmf_tgt_poll_group_000", 00:23:29.283 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:29.283 
"listen_address": { 00:23:29.283 "trtype": "TCP", 00:23:29.283 "adrfam": "IPv4", 00:23:29.283 "traddr": "10.0.0.2", 00:23:29.283 "trsvcid": "4420" 00:23:29.283 }, 00:23:29.283 "peer_address": { 00:23:29.283 "trtype": "TCP", 00:23:29.283 "adrfam": "IPv4", 00:23:29.283 "traddr": "10.0.0.1", 00:23:29.283 "trsvcid": "43688" 00:23:29.283 }, 00:23:29.283 "auth": { 00:23:29.283 "state": "completed", 00:23:29.283 "digest": "sha384", 00:23:29.283 "dhgroup": "ffdhe3072" 00:23:29.283 } 00:23:29.283 } 00:23:29.283 ]' 00:23:29.283 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:29.283 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:29.283 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:29.283 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:29.283 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:29.283 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:29.283 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:29.283 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:29.544 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:23:29.544 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:23:30.487 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:30.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:30.487 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:30.487 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.487 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.487 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.487 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:30.487 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:30.487 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:30.487 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:23:30.487 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:30.487 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:23:30.487 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:30.487 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:30.487 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:30.487 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:30.487 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.487 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.487 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.488 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:30.488 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:30.488 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:30.748 00:23:30.748 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:30.748 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:23:30.748 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:31.009 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.009 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:31.009 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.009 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.009 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.009 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:31.009 { 00:23:31.009 "cntlid": 71, 00:23:31.009 "qid": 0, 00:23:31.009 "state": "enabled", 00:23:31.009 "thread": "nvmf_tgt_poll_group_000", 00:23:31.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:31.009 "listen_address": { 00:23:31.009 "trtype": "TCP", 00:23:31.009 "adrfam": "IPv4", 00:23:31.009 "traddr": "10.0.0.2", 00:23:31.009 "trsvcid": "4420" 00:23:31.009 }, 00:23:31.009 "peer_address": { 00:23:31.009 "trtype": "TCP", 00:23:31.009 "adrfam": "IPv4", 00:23:31.009 "traddr": "10.0.0.1", 00:23:31.009 "trsvcid": "43706" 00:23:31.009 }, 00:23:31.009 "auth": { 00:23:31.009 "state": "completed", 00:23:31.009 "digest": "sha384", 00:23:31.009 "dhgroup": "ffdhe3072" 00:23:31.009 } 00:23:31.009 } 00:23:31.009 ]' 00:23:31.009 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:31.009 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:31.009 14:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:31.009 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:31.009 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:31.009 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:31.009 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:31.009 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:31.270 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:23:31.270 14:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:23:31.842 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:32.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:32.104 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:32.104 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:23:32.104 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.104 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.104 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:32.104 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:32.104 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:32.104 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:32.104 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:23:32.104 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:32.104 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:32.104 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:32.104 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:32.104 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:32.104 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:32.104 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:32.104 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.104 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.104 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:32.104 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:32.104 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:32.365 00:23:32.365 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:32.365 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:32.365 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:32.626 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.626 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:32.626 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.626 14:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.626 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.626 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:32.626 { 00:23:32.626 "cntlid": 73, 00:23:32.626 "qid": 0, 00:23:32.626 "state": "enabled", 00:23:32.626 "thread": "nvmf_tgt_poll_group_000", 00:23:32.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:32.626 "listen_address": { 00:23:32.626 "trtype": "TCP", 00:23:32.626 "adrfam": "IPv4", 00:23:32.626 "traddr": "10.0.0.2", 00:23:32.626 "trsvcid": "4420" 00:23:32.626 }, 00:23:32.626 "peer_address": { 00:23:32.626 "trtype": "TCP", 00:23:32.626 "adrfam": "IPv4", 00:23:32.626 "traddr": "10.0.0.1", 00:23:32.626 "trsvcid": "45404" 00:23:32.626 }, 00:23:32.626 "auth": { 00:23:32.626 "state": "completed", 00:23:32.626 "digest": "sha384", 00:23:32.626 "dhgroup": "ffdhe4096" 00:23:32.626 } 00:23:32.626 } 00:23:32.626 ]' 00:23:32.626 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:32.626 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:32.626 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:32.626 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:32.626 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:32.887 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:32.887 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:32.887 14:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:32.887 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:23:32.887 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:23:33.829 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:33.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:33.829 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:33.829 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.829 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.829 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.829 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:33.829 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:33.829 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:33.829 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:23:33.829 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:33.829 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:33.829 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:33.829 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:33.829 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:33.829 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.829 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.829 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.829 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.829 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.829 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.829 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:34.089 00:23:34.090 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:34.090 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:34.090 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:34.351 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.351 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:34.351 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.351 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.351 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.351 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:34.351 { 00:23:34.351 "cntlid": 75, 00:23:34.351 "qid": 0, 00:23:34.351 "state": "enabled", 00:23:34.351 "thread": "nvmf_tgt_poll_group_000", 00:23:34.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:34.351 
"listen_address": { 00:23:34.351 "trtype": "TCP", 00:23:34.351 "adrfam": "IPv4", 00:23:34.351 "traddr": "10.0.0.2", 00:23:34.351 "trsvcid": "4420" 00:23:34.351 }, 00:23:34.351 "peer_address": { 00:23:34.351 "trtype": "TCP", 00:23:34.351 "adrfam": "IPv4", 00:23:34.351 "traddr": "10.0.0.1", 00:23:34.351 "trsvcid": "45422" 00:23:34.351 }, 00:23:34.351 "auth": { 00:23:34.351 "state": "completed", 00:23:34.351 "digest": "sha384", 00:23:34.351 "dhgroup": "ffdhe4096" 00:23:34.351 } 00:23:34.351 } 00:23:34.351 ]' 00:23:34.351 14:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:34.351 14:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:34.351 14:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:34.612 14:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:34.612 14:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:34.612 14:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:34.612 14:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:34.612 14:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:34.612 14:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:23:34.612 14:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:23:35.554 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:35.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:35.554 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:35.554 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.554 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.554 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.554 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:35.554 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:35.554 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:35.554 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:23:35.554 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:35.554 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:23:35.554 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:35.554 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:35.554 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:35.554 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:35.554 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.554 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.554 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.554 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:35.554 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:35.554 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:35.815 00:23:35.815 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:23:35.815 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:35.815 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:36.075 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.075 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:36.075 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.075 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.075 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.075 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:36.075 { 00:23:36.075 "cntlid": 77, 00:23:36.075 "qid": 0, 00:23:36.075 "state": "enabled", 00:23:36.075 "thread": "nvmf_tgt_poll_group_000", 00:23:36.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:36.075 "listen_address": { 00:23:36.075 "trtype": "TCP", 00:23:36.075 "adrfam": "IPv4", 00:23:36.075 "traddr": "10.0.0.2", 00:23:36.075 "trsvcid": "4420" 00:23:36.075 }, 00:23:36.075 "peer_address": { 00:23:36.075 "trtype": "TCP", 00:23:36.075 "adrfam": "IPv4", 00:23:36.075 "traddr": "10.0.0.1", 00:23:36.075 "trsvcid": "45444" 00:23:36.075 }, 00:23:36.075 "auth": { 00:23:36.075 "state": "completed", 00:23:36.075 "digest": "sha384", 00:23:36.075 "dhgroup": "ffdhe4096" 00:23:36.075 } 00:23:36.075 } 00:23:36.075 ]' 00:23:36.075 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:36.075 14:33:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:36.075 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:36.336 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:36.336 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:36.336 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:36.336 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:36.336 14:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:36.336 14:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:23:36.336 14:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:23:37.280 14:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:37.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:37.280 14:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:37.280 14:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.280 14:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.280 14:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.280 14:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:37.280 14:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:37.280 14:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:37.280 14:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:23:37.280 14:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:37.280 14:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:37.280 14:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:37.280 14:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:37.280 14:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:37.280 14:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:37.280 14:34:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.280 14:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.280 14:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.280 14:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:37.280 14:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:37.280 14:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:37.541 00:23:37.860 14:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:37.860 14:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:37.860 14:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:37.860 14:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.860 14:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:37.860 14:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.860 14:34:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.860 14:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.860 14:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:37.860 { 00:23:37.860 "cntlid": 79, 00:23:37.860 "qid": 0, 00:23:37.860 "state": "enabled", 00:23:37.860 "thread": "nvmf_tgt_poll_group_000", 00:23:37.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:37.860 "listen_address": { 00:23:37.860 "trtype": "TCP", 00:23:37.860 "adrfam": "IPv4", 00:23:37.860 "traddr": "10.0.0.2", 00:23:37.860 "trsvcid": "4420" 00:23:37.860 }, 00:23:37.860 "peer_address": { 00:23:37.860 "trtype": "TCP", 00:23:37.860 "adrfam": "IPv4", 00:23:37.860 "traddr": "10.0.0.1", 00:23:37.860 "trsvcid": "45476" 00:23:37.860 }, 00:23:37.860 "auth": { 00:23:37.860 "state": "completed", 00:23:37.860 "digest": "sha384", 00:23:37.860 "dhgroup": "ffdhe4096" 00:23:37.860 } 00:23:37.860 } 00:23:37.860 ]' 00:23:37.860 14:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:37.860 14:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:37.860 14:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:37.860 14:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:38.148 14:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:38.148 14:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:38.148 14:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:38.148 14:34:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:38.148 14:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:23:38.148 14:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:23:38.784 14:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:38.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:38.784 14:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:38.784 14:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.784 14:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.784 14:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.784 14:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:38.784 14:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:38.784 14:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:23:38.784 14:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:39.075 14:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:23:39.075 14:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:39.075 14:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:39.075 14:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:39.075 14:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:39.075 14:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:39.075 14:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:39.075 14:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.075 14:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.075 14:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.075 14:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:39.075 14:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:39.075 14:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:39.389 00:23:39.389 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:39.389 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:39.389 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:39.688 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.688 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:39.688 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.688 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.688 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.688 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:39.688 { 00:23:39.688 "cntlid": 81, 00:23:39.688 "qid": 0, 00:23:39.688 "state": "enabled", 00:23:39.688 "thread": "nvmf_tgt_poll_group_000", 00:23:39.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:39.688 "listen_address": { 
00:23:39.688 "trtype": "TCP", 00:23:39.688 "adrfam": "IPv4", 00:23:39.688 "traddr": "10.0.0.2", 00:23:39.688 "trsvcid": "4420" 00:23:39.688 }, 00:23:39.688 "peer_address": { 00:23:39.688 "trtype": "TCP", 00:23:39.688 "adrfam": "IPv4", 00:23:39.688 "traddr": "10.0.0.1", 00:23:39.688 "trsvcid": "45496" 00:23:39.688 }, 00:23:39.688 "auth": { 00:23:39.688 "state": "completed", 00:23:39.688 "digest": "sha384", 00:23:39.688 "dhgroup": "ffdhe6144" 00:23:39.688 } 00:23:39.688 } 00:23:39.688 ]' 00:23:39.688 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:39.688 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:39.688 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:39.688 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:39.688 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:39.688 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:39.688 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:39.688 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:39.950 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:23:39.950 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:23:40.893 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:40.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:40.893 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:40.893 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.893 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.893 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.893 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:40.893 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:40.893 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:40.893 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:23:40.893 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:23:40.893 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:40.893 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:40.893 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:40.893 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:40.893 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:40.893 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.893 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.894 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.894 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:40.894 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:40.894 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:41.154 00:23:41.154 14:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:41.154 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:41.154 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:41.416 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.416 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:41.416 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.416 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.416 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.416 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:41.416 { 00:23:41.416 "cntlid": 83, 00:23:41.416 "qid": 0, 00:23:41.416 "state": "enabled", 00:23:41.416 "thread": "nvmf_tgt_poll_group_000", 00:23:41.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:41.416 "listen_address": { 00:23:41.416 "trtype": "TCP", 00:23:41.416 "adrfam": "IPv4", 00:23:41.416 "traddr": "10.0.0.2", 00:23:41.416 "trsvcid": "4420" 00:23:41.416 }, 00:23:41.416 "peer_address": { 00:23:41.416 "trtype": "TCP", 00:23:41.416 "adrfam": "IPv4", 00:23:41.416 "traddr": "10.0.0.1", 00:23:41.416 "trsvcid": "45536" 00:23:41.416 }, 00:23:41.416 "auth": { 00:23:41.416 "state": "completed", 00:23:41.416 "digest": "sha384", 00:23:41.416 "dhgroup": "ffdhe6144" 00:23:41.416 } 00:23:41.416 } 00:23:41.416 ]' 00:23:41.416 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:23:41.416 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:41.416 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:41.416 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:41.416 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:41.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:41.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:41.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:41.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:23:41.678 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:23:42.620 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:42.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:42.620 14:34:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:42.620 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.620 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.620 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.620 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:42.620 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:42.620 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:42.620 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:23:42.620 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:42.620 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:42.620 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:42.620 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:42.620 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:42.620 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:42.621 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.621 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.621 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.621 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:42.621 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:42.621 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:43.192 00:23:43.192 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:43.192 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:43.192 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:43.192 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.192 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:43.192 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.192 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.192 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.192 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:43.192 { 00:23:43.192 "cntlid": 85, 00:23:43.192 "qid": 0, 00:23:43.192 "state": "enabled", 00:23:43.192 "thread": "nvmf_tgt_poll_group_000", 00:23:43.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:43.192 "listen_address": { 00:23:43.192 "trtype": "TCP", 00:23:43.192 "adrfam": "IPv4", 00:23:43.192 "traddr": "10.0.0.2", 00:23:43.192 "trsvcid": "4420" 00:23:43.192 }, 00:23:43.192 "peer_address": { 00:23:43.192 "trtype": "TCP", 00:23:43.192 "adrfam": "IPv4", 00:23:43.192 "traddr": "10.0.0.1", 00:23:43.192 "trsvcid": "59096" 00:23:43.192 }, 00:23:43.192 "auth": { 00:23:43.192 "state": "completed", 00:23:43.192 "digest": "sha384", 00:23:43.192 "dhgroup": "ffdhe6144" 00:23:43.192 } 00:23:43.192 } 00:23:43.192 ]' 00:23:43.192 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:43.192 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:43.453 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:43.453 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:43.453 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:43.453 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:23:43.453 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:43.453 14:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:43.714 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:23:43.714 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:23:44.286 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:44.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:44.286 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:44.286 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.286 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.286 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.286 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:23:44.286 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:44.286 14:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:44.551 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:23:44.551 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:44.551 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:44.551 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:44.551 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:44.551 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:44.551 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:44.551 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.551 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.551 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.551 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:44.551 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:44.551 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:44.817 00:23:44.817 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:44.817 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:44.817 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:45.080 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.080 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:45.080 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.080 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.080 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.080 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:45.080 { 00:23:45.080 "cntlid": 87, 00:23:45.080 "qid": 0, 00:23:45.080 "state": "enabled", 00:23:45.080 "thread": "nvmf_tgt_poll_group_000", 00:23:45.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:45.080 "listen_address": { 00:23:45.080 "trtype": 
"TCP", 00:23:45.080 "adrfam": "IPv4", 00:23:45.080 "traddr": "10.0.0.2", 00:23:45.080 "trsvcid": "4420" 00:23:45.080 }, 00:23:45.080 "peer_address": { 00:23:45.080 "trtype": "TCP", 00:23:45.080 "adrfam": "IPv4", 00:23:45.080 "traddr": "10.0.0.1", 00:23:45.080 "trsvcid": "59126" 00:23:45.080 }, 00:23:45.080 "auth": { 00:23:45.080 "state": "completed", 00:23:45.080 "digest": "sha384", 00:23:45.080 "dhgroup": "ffdhe6144" 00:23:45.080 } 00:23:45.080 } 00:23:45.080 ]' 00:23:45.080 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:45.080 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:45.080 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:45.341 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:45.341 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:45.341 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:45.341 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:45.341 14:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:45.341 14:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:23:45.341 14:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:23:46.286 14:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:46.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:46.286 14:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:46.286 14:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.286 14:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.286 14:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.286 14:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:46.286 14:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:46.286 14:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:46.286 14:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:46.286 14:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:23:46.286 14:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:46.286 14:34:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:46.286 14:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:46.286 14:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:46.286 14:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:46.286 14:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.286 14:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.286 14:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.286 14:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.286 14:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.286 14:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.286 14:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.857 00:23:46.858 14:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:46.858 14:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:46.858 14:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:47.119 14:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.119 14:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:47.119 14:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.119 14:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.119 14:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.119 14:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:47.119 { 00:23:47.119 "cntlid": 89, 00:23:47.119 "qid": 0, 00:23:47.119 "state": "enabled", 00:23:47.119 "thread": "nvmf_tgt_poll_group_000", 00:23:47.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:47.119 "listen_address": { 00:23:47.119 "trtype": "TCP", 00:23:47.119 "adrfam": "IPv4", 00:23:47.119 "traddr": "10.0.0.2", 00:23:47.119 "trsvcid": "4420" 00:23:47.119 }, 00:23:47.119 "peer_address": { 00:23:47.119 "trtype": "TCP", 00:23:47.119 "adrfam": "IPv4", 00:23:47.119 "traddr": "10.0.0.1", 00:23:47.119 "trsvcid": "59138" 00:23:47.119 }, 00:23:47.119 "auth": { 00:23:47.119 "state": "completed", 00:23:47.119 "digest": "sha384", 00:23:47.119 "dhgroup": "ffdhe8192" 00:23:47.119 } 00:23:47.119 } 00:23:47.119 ]' 00:23:47.119 14:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:47.119 14:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:47.119 14:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:47.119 14:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:47.119 14:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:47.380 14:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:47.380 14:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:47.380 14:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:47.380 14:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:23:47.380 14:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:23:48.323 14:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:48.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:23:48.323 14:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:48.323 14:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.323 14:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.323 14:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.323 14:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:48.323 14:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:48.323 14:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:48.323 14:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:23:48.323 14:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:48.323 14:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:48.323 14:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:48.323 14:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:48.323 14:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:48.323 14:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.323 14:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.323 14:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.323 14:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.323 14:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.323 14:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.323 14:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.897 00:23:48.897 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:48.897 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:48.897 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:49.158 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.158 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:49.158 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.158 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.158 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.158 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:49.158 { 00:23:49.159 "cntlid": 91, 00:23:49.159 "qid": 0, 00:23:49.159 "state": "enabled", 00:23:49.159 "thread": "nvmf_tgt_poll_group_000", 00:23:49.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:49.159 "listen_address": { 00:23:49.159 "trtype": "TCP", 00:23:49.159 "adrfam": "IPv4", 00:23:49.159 "traddr": "10.0.0.2", 00:23:49.159 "trsvcid": "4420" 00:23:49.159 }, 00:23:49.159 "peer_address": { 00:23:49.159 "trtype": "TCP", 00:23:49.159 "adrfam": "IPv4", 00:23:49.159 "traddr": "10.0.0.1", 00:23:49.159 "trsvcid": "59170" 00:23:49.159 }, 00:23:49.159 "auth": { 00:23:49.159 "state": "completed", 00:23:49.159 "digest": "sha384", 00:23:49.159 "dhgroup": "ffdhe8192" 00:23:49.159 } 00:23:49.159 } 00:23:49.159 ]' 00:23:49.159 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:49.159 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:49.159 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:49.159 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:49.159 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:49.159 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:23:49.159 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:49.159 14:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:49.420 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:23:49.420 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:23:50.362 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:50.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:50.362 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:50.362 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.362 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.362 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.362 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:23:50.362 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:50.362 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:50.362 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:23:50.362 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:50.362 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:50.362 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:50.362 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:50.362 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:50.362 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:50.362 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.362 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.362 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.362 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:50.362 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:50.362 14:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:50.935 00:23:50.935 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:50.935 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:50.935 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:51.196 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.196 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:51.196 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.196 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.196 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.196 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:51.196 { 00:23:51.196 "cntlid": 93, 00:23:51.196 "qid": 0, 00:23:51.196 "state": "enabled", 00:23:51.196 "thread": "nvmf_tgt_poll_group_000", 00:23:51.196 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:51.196 "listen_address": { 00:23:51.196 "trtype": "TCP", 00:23:51.196 "adrfam": "IPv4", 00:23:51.196 "traddr": "10.0.0.2", 00:23:51.196 "trsvcid": "4420" 00:23:51.196 }, 00:23:51.196 "peer_address": { 00:23:51.196 "trtype": "TCP", 00:23:51.196 "adrfam": "IPv4", 00:23:51.196 "traddr": "10.0.0.1", 00:23:51.196 "trsvcid": "59204" 00:23:51.196 }, 00:23:51.196 "auth": { 00:23:51.196 "state": "completed", 00:23:51.196 "digest": "sha384", 00:23:51.196 "dhgroup": "ffdhe8192" 00:23:51.196 } 00:23:51.196 } 00:23:51.196 ]' 00:23:51.196 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:51.196 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:51.196 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:51.196 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:51.196 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:51.196 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:51.196 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:51.196 14:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:51.457 14:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:23:51.457 14:34:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:23:52.400 14:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:52.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:52.400 14:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:52.400 14:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.400 14:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.400 14:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.400 14:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:52.400 14:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:52.400 14:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:52.400 14:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:23:52.400 14:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:23:52.400 14:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:52.400 14:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:52.400 14:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:52.400 14:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:52.400 14:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:52.400 14:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.400 14:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.400 14:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.400 14:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:52.400 14:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:52.400 14:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:52.972 00:23:52.972 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:23:52.972 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:52.972 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:53.233 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.233 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:53.233 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.233 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.233 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.233 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:53.233 { 00:23:53.233 "cntlid": 95, 00:23:53.233 "qid": 0, 00:23:53.233 "state": "enabled", 00:23:53.233 "thread": "nvmf_tgt_poll_group_000", 00:23:53.233 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:53.233 "listen_address": { 00:23:53.233 "trtype": "TCP", 00:23:53.233 "adrfam": "IPv4", 00:23:53.233 "traddr": "10.0.0.2", 00:23:53.233 "trsvcid": "4420" 00:23:53.233 }, 00:23:53.233 "peer_address": { 00:23:53.233 "trtype": "TCP", 00:23:53.233 "adrfam": "IPv4", 00:23:53.233 "traddr": "10.0.0.1", 00:23:53.233 "trsvcid": "38842" 00:23:53.233 }, 00:23:53.233 "auth": { 00:23:53.233 "state": "completed", 00:23:53.233 "digest": "sha384", 00:23:53.233 "dhgroup": "ffdhe8192" 00:23:53.233 } 00:23:53.233 } 00:23:53.233 ]' 00:23:53.233 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:53.233 14:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:53.233 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:53.233 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:53.233 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:53.233 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:53.233 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:53.233 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:53.494 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:23:53.494 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:23:54.066 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:54.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:54.066 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:54.066 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.066 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.066 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.066 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:23:54.066 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:54.066 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:54.066 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:54.066 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:54.327 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:23:54.327 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:54.327 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:54.327 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:54.327 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:54.327 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:54.327 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:54.327 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.327 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.327 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.327 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:54.327 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:54.327 14:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:54.588 00:23:54.588 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:54.588 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:54.588 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:54.848 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.848 14:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:54.848 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.848 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.848 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.848 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:54.848 { 00:23:54.848 "cntlid": 97, 00:23:54.848 "qid": 0, 00:23:54.848 "state": "enabled", 00:23:54.848 "thread": "nvmf_tgt_poll_group_000", 00:23:54.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:54.848 "listen_address": { 00:23:54.848 "trtype": "TCP", 00:23:54.848 "adrfam": "IPv4", 00:23:54.848 "traddr": "10.0.0.2", 00:23:54.848 "trsvcid": "4420" 00:23:54.848 }, 00:23:54.848 "peer_address": { 00:23:54.848 "trtype": "TCP", 00:23:54.848 "adrfam": "IPv4", 00:23:54.848 "traddr": "10.0.0.1", 00:23:54.848 "trsvcid": "38878" 00:23:54.848 }, 00:23:54.848 "auth": { 00:23:54.848 "state": "completed", 00:23:54.848 "digest": "sha512", 00:23:54.848 "dhgroup": "null" 00:23:54.848 } 00:23:54.848 } 00:23:54.848 ]' 00:23:54.848 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:54.848 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:54.848 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:54.848 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:54.848 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:54.848 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:54.848 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:54.848 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:55.108 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:23:55.108 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:23:56.051 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:56.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:56.051 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:56.051 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.051 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.051 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.051 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:56.051 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:56.051 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:56.051 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:23:56.051 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:56.051 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:56.051 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:56.051 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:56.051 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:56.051 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.051 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.051 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.051 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.051 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.051 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.051 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.312 00:23:56.312 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:56.312 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:56.312 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:56.573 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.573 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:56.573 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.573 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.573 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.573 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:56.573 { 00:23:56.573 "cntlid": 99, 
00:23:56.573 "qid": 0, 00:23:56.573 "state": "enabled", 00:23:56.573 "thread": "nvmf_tgt_poll_group_000", 00:23:56.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:56.573 "listen_address": { 00:23:56.573 "trtype": "TCP", 00:23:56.573 "adrfam": "IPv4", 00:23:56.573 "traddr": "10.0.0.2", 00:23:56.573 "trsvcid": "4420" 00:23:56.573 }, 00:23:56.573 "peer_address": { 00:23:56.573 "trtype": "TCP", 00:23:56.573 "adrfam": "IPv4", 00:23:56.573 "traddr": "10.0.0.1", 00:23:56.573 "trsvcid": "38906" 00:23:56.573 }, 00:23:56.573 "auth": { 00:23:56.573 "state": "completed", 00:23:56.573 "digest": "sha512", 00:23:56.573 "dhgroup": "null" 00:23:56.573 } 00:23:56.573 } 00:23:56.573 ]' 00:23:56.574 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:56.574 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:56.574 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:56.574 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:56.574 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:56.574 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:56.574 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:56.574 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:56.834 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret 
DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:23:56.834 14:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:23:57.776 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:57.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:57.776 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:57.776 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.776 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.776 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.776 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:57.776 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:57.776 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:57.776 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:23:57.776 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:57.776 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:57.776 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:57.776 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:57.776 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:57.776 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:57.776 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.776 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.776 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.777 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:57.777 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:57.777 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:58.037 00:23:58.037 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:58.037 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:58.037 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:58.298 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.298 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:58.298 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.298 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.298 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.298 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:58.298 { 00:23:58.298 "cntlid": 101, 00:23:58.298 "qid": 0, 00:23:58.298 "state": "enabled", 00:23:58.298 "thread": "nvmf_tgt_poll_group_000", 00:23:58.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:23:58.298 "listen_address": { 00:23:58.298 "trtype": "TCP", 00:23:58.298 "adrfam": "IPv4", 00:23:58.298 "traddr": "10.0.0.2", 00:23:58.298 "trsvcid": "4420" 00:23:58.298 }, 00:23:58.298 "peer_address": { 00:23:58.298 "trtype": "TCP", 00:23:58.298 "adrfam": "IPv4", 00:23:58.298 "traddr": "10.0.0.1", 00:23:58.298 "trsvcid": "38924" 00:23:58.298 }, 00:23:58.298 "auth": { 00:23:58.298 "state": "completed", 00:23:58.298 "digest": "sha512", 00:23:58.298 "dhgroup": "null" 00:23:58.298 } 00:23:58.298 } 
00:23:58.298 ]' 00:23:58.298 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:58.298 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:58.298 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:58.298 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:58.298 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:58.298 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:58.298 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:58.298 14:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:58.559 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:23:58.559 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:23:59.502 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:59.502 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:59.502 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:59.502 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.502 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.502 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.502 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:59.502 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:59.502 14:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:59.502 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:23:59.502 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:59.502 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:59.502 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:59.502 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:59.502 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:59.502 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:23:59.502 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.502 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.502 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.502 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:59.502 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:59.502 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:59.762 00:23:59.762 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:59.762 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:59.762 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:00.023 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.023 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:24:00.023 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.023 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.023 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.023 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:00.023 { 00:24:00.023 "cntlid": 103, 00:24:00.023 "qid": 0, 00:24:00.024 "state": "enabled", 00:24:00.024 "thread": "nvmf_tgt_poll_group_000", 00:24:00.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:00.024 "listen_address": { 00:24:00.024 "trtype": "TCP", 00:24:00.024 "adrfam": "IPv4", 00:24:00.024 "traddr": "10.0.0.2", 00:24:00.024 "trsvcid": "4420" 00:24:00.024 }, 00:24:00.024 "peer_address": { 00:24:00.024 "trtype": "TCP", 00:24:00.024 "adrfam": "IPv4", 00:24:00.024 "traddr": "10.0.0.1", 00:24:00.024 "trsvcid": "38952" 00:24:00.024 }, 00:24:00.024 "auth": { 00:24:00.024 "state": "completed", 00:24:00.024 "digest": "sha512", 00:24:00.024 "dhgroup": "null" 00:24:00.024 } 00:24:00.024 } 00:24:00.024 ]' 00:24:00.024 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:00.024 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:00.024 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:00.024 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:24:00.024 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:00.024 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:00.024 14:34:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:00.024 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:00.284 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:24:00.284 14:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:24:01.226 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:01.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:01.226 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:01.226 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.226 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.226 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.226 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:01.226 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:01.226 14:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:01.226 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:01.226 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:24:01.226 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:01.226 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:01.226 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:24:01.226 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:01.226 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:01.226 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:01.226 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.226 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.226 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.226 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:01.226 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:01.226 14:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:01.487 00:24:01.487 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:01.487 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:01.487 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:01.747 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.747 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:01.747 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.747 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.747 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.747 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:01.747 { 00:24:01.747 "cntlid": 105, 00:24:01.747 "qid": 0, 00:24:01.747 "state": "enabled", 00:24:01.747 "thread": "nvmf_tgt_poll_group_000", 00:24:01.747 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:01.747 "listen_address": { 00:24:01.747 "trtype": "TCP", 00:24:01.747 "adrfam": "IPv4", 00:24:01.747 "traddr": "10.0.0.2", 00:24:01.747 "trsvcid": "4420" 00:24:01.747 }, 00:24:01.747 "peer_address": { 00:24:01.747 "trtype": "TCP", 00:24:01.747 "adrfam": "IPv4", 00:24:01.747 "traddr": "10.0.0.1", 00:24:01.747 "trsvcid": "43770" 00:24:01.747 }, 00:24:01.747 "auth": { 00:24:01.747 "state": "completed", 00:24:01.747 "digest": "sha512", 00:24:01.747 "dhgroup": "ffdhe2048" 00:24:01.747 } 00:24:01.747 } 00:24:01.747 ]' 00:24:01.747 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:01.747 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:01.747 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:01.748 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:01.748 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:01.748 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:01.748 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:01.748 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:02.007 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret 
DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:24:02.007 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:24:02.948 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:02.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:02.948 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:02.948 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.948 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.948 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.948 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:02.948 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:02.948 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:02.948 14:34:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:24:02.948 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:02.948 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:02.948 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:24:02.948 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:02.948 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:02.948 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:02.948 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.948 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.948 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.948 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:02.948 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:02.948 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:03.209 00:24:03.209 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:03.209 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:03.209 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:03.470 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.470 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:03.470 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.470 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.470 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.470 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:03.470 { 00:24:03.470 "cntlid": 107, 00:24:03.470 "qid": 0, 00:24:03.470 "state": "enabled", 00:24:03.470 "thread": "nvmf_tgt_poll_group_000", 00:24:03.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:03.470 "listen_address": { 00:24:03.470 "trtype": "TCP", 00:24:03.470 "adrfam": "IPv4", 00:24:03.470 "traddr": "10.0.0.2", 00:24:03.470 "trsvcid": "4420" 00:24:03.470 }, 00:24:03.470 "peer_address": { 00:24:03.470 "trtype": "TCP", 00:24:03.470 "adrfam": "IPv4", 00:24:03.470 "traddr": "10.0.0.1", 00:24:03.470 "trsvcid": "43794" 00:24:03.470 }, 00:24:03.470 "auth": { 00:24:03.470 "state": 
"completed", 00:24:03.470 "digest": "sha512", 00:24:03.470 "dhgroup": "ffdhe2048" 00:24:03.470 } 00:24:03.470 } 00:24:03.470 ]' 00:24:03.470 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:03.470 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:03.470 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:03.470 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:03.470 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:03.471 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:03.471 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:03.471 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:03.735 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:24:03.735 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:24:04.676 14:34:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:04.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:04.676 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:04.676 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.676 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.676 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.676 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:04.676 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:04.676 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:04.676 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:24:04.676 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:04.676 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:04.676 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:24:04.676 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:04.676 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:04.676 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:04.676 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.676 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.676 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.676 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:04.676 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:04.676 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:04.936 00:24:04.936 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:04.936 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:04.936 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:04.936 
14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.197 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:05.197 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.197 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.197 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.197 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:05.197 { 00:24:05.197 "cntlid": 109, 00:24:05.197 "qid": 0, 00:24:05.197 "state": "enabled", 00:24:05.197 "thread": "nvmf_tgt_poll_group_000", 00:24:05.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:05.197 "listen_address": { 00:24:05.197 "trtype": "TCP", 00:24:05.197 "adrfam": "IPv4", 00:24:05.197 "traddr": "10.0.0.2", 00:24:05.197 "trsvcid": "4420" 00:24:05.197 }, 00:24:05.197 "peer_address": { 00:24:05.197 "trtype": "TCP", 00:24:05.197 "adrfam": "IPv4", 00:24:05.197 "traddr": "10.0.0.1", 00:24:05.197 "trsvcid": "43810" 00:24:05.197 }, 00:24:05.197 "auth": { 00:24:05.197 "state": "completed", 00:24:05.197 "digest": "sha512", 00:24:05.197 "dhgroup": "ffdhe2048" 00:24:05.197 } 00:24:05.197 } 00:24:05.197 ]' 00:24:05.197 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:05.197 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:05.197 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:05.197 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:05.197 14:34:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:05.197 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:05.197 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:05.197 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:05.457 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:24:05.457 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:24:06.027 14:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:06.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:06.027 14:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:06.027 14:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.027 14:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.027 
14:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.028 14:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:06.028 14:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:06.028 14:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:06.288 14:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:24:06.288 14:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:06.288 14:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:06.288 14:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:24:06.288 14:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:06.288 14:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:06.288 14:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:24:06.288 14:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.288 14:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.288 14:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.288 14:34:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:06.288 14:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:06.288 14:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:06.549 00:24:06.549 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:06.549 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:06.549 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:06.809 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.809 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:06.809 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.809 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.810 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.810 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:06.810 { 00:24:06.810 "cntlid": 111, 
00:24:06.810 "qid": 0, 00:24:06.810 "state": "enabled", 00:24:06.810 "thread": "nvmf_tgt_poll_group_000", 00:24:06.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:06.810 "listen_address": { 00:24:06.810 "trtype": "TCP", 00:24:06.810 "adrfam": "IPv4", 00:24:06.810 "traddr": "10.0.0.2", 00:24:06.810 "trsvcid": "4420" 00:24:06.810 }, 00:24:06.810 "peer_address": { 00:24:06.810 "trtype": "TCP", 00:24:06.810 "adrfam": "IPv4", 00:24:06.810 "traddr": "10.0.0.1", 00:24:06.810 "trsvcid": "43848" 00:24:06.810 }, 00:24:06.810 "auth": { 00:24:06.810 "state": "completed", 00:24:06.810 "digest": "sha512", 00:24:06.810 "dhgroup": "ffdhe2048" 00:24:06.810 } 00:24:06.810 } 00:24:06.810 ]' 00:24:06.810 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:06.810 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:06.810 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:06.810 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:06.810 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:06.810 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:06.810 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:06.810 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:07.071 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:24:07.071 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:24:08.014 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:08.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:08.014 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:08.015 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.015 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.015 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.015 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:08.015 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:08.015 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:08.015 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:08.015 14:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:24:08.015 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:08.015 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:08.015 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:08.015 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:08.015 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:08.015 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:08.015 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.015 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.015 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.015 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:08.015 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:08.015 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:08.276 00:24:08.276 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:08.276 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:08.276 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:08.537 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.537 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:08.537 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.537 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.537 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.537 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:08.537 { 00:24:08.537 "cntlid": 113, 00:24:08.537 "qid": 0, 00:24:08.537 "state": "enabled", 00:24:08.537 "thread": "nvmf_tgt_poll_group_000", 00:24:08.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:08.537 "listen_address": { 00:24:08.537 "trtype": "TCP", 00:24:08.537 "adrfam": "IPv4", 00:24:08.537 "traddr": "10.0.0.2", 00:24:08.537 "trsvcid": "4420" 00:24:08.537 }, 00:24:08.537 "peer_address": { 00:24:08.537 "trtype": "TCP", 00:24:08.537 "adrfam": "IPv4", 00:24:08.537 "traddr": "10.0.0.1", 00:24:08.537 "trsvcid": "43878" 00:24:08.537 }, 00:24:08.537 "auth": { 00:24:08.537 "state": 
"completed", 00:24:08.537 "digest": "sha512", 00:24:08.537 "dhgroup": "ffdhe3072" 00:24:08.537 } 00:24:08.537 } 00:24:08.537 ]' 00:24:08.537 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:08.537 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:08.537 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:08.537 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:08.537 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:08.537 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:08.537 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:08.537 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:08.798 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:24:08.798 14:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret 
DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:24:09.740 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:09.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:09.740 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:09.740 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.740 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.740 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.740 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:09.740 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:09.740 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:09.740 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:24:09.740 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:09.740 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:09.740 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:09.740 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:24:09.740 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:09.740 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:09.740 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.740 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.740 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.740 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:09.740 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:09.740 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:10.000 00:24:10.000 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:10.000 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:10.000 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:10.262 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.262 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:10.262 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.262 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.262 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.262 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:10.262 { 00:24:10.262 "cntlid": 115, 00:24:10.262 "qid": 0, 00:24:10.262 "state": "enabled", 00:24:10.262 "thread": "nvmf_tgt_poll_group_000", 00:24:10.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:10.262 "listen_address": { 00:24:10.262 "trtype": "TCP", 00:24:10.262 "adrfam": "IPv4", 00:24:10.262 "traddr": "10.0.0.2", 00:24:10.262 "trsvcid": "4420" 00:24:10.262 }, 00:24:10.262 "peer_address": { 00:24:10.262 "trtype": "TCP", 00:24:10.262 "adrfam": "IPv4", 00:24:10.262 "traddr": "10.0.0.1", 00:24:10.262 "trsvcid": "43886" 00:24:10.262 }, 00:24:10.262 "auth": { 00:24:10.262 "state": "completed", 00:24:10.262 "digest": "sha512", 00:24:10.262 "dhgroup": "ffdhe3072" 00:24:10.262 } 00:24:10.262 } 00:24:10.262 ]' 00:24:10.262 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:10.262 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:10.262 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:10.523 14:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:10.523 14:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:10.523 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:10.523 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:10.523 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:10.523 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:24:10.523 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:24:11.465 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:11.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:11.465 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:11.465 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:11.465 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.465 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.465 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:11.465 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:11.465 14:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:11.465 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:24:11.465 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:11.465 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:11.465 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:11.465 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:11.465 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:11.465 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:11.465 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.465 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:24:11.465 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.465 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:11.465 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:11.465 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:11.726 00:24:11.726 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:11.726 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:11.726 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:11.986 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.986 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:11.986 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.986 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.986 14:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.986 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:11.986 { 00:24:11.986 "cntlid": 117, 00:24:11.986 "qid": 0, 00:24:11.986 "state": "enabled", 00:24:11.986 "thread": "nvmf_tgt_poll_group_000", 00:24:11.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:11.986 "listen_address": { 00:24:11.986 "trtype": "TCP", 00:24:11.986 "adrfam": "IPv4", 00:24:11.986 "traddr": "10.0.0.2", 00:24:11.986 "trsvcid": "4420" 00:24:11.986 }, 00:24:11.986 "peer_address": { 00:24:11.986 "trtype": "TCP", 00:24:11.986 "adrfam": "IPv4", 00:24:11.986 "traddr": "10.0.0.1", 00:24:11.986 "trsvcid": "42190" 00:24:11.986 }, 00:24:11.986 "auth": { 00:24:11.986 "state": "completed", 00:24:11.986 "digest": "sha512", 00:24:11.986 "dhgroup": "ffdhe3072" 00:24:11.986 } 00:24:11.986 } 00:24:11.986 ]' 00:24:11.986 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:11.986 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:11.986 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:12.248 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:12.248 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:12.248 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:12.248 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:12.248 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:12.248 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:24:12.248 14:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:24:13.189 14:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:13.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:13.189 14:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:13.189 14:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.189 14:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.189 14:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.189 14:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:13.189 14:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:13.189 14:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:13.189 14:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:24:13.189 14:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:13.189 14:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:13.189 14:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:24:13.189 14:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:13.189 14:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:13.189 14:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:24:13.189 14:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.189 14:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.190 14:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.190 14:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:13.190 14:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:13.190 14:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:13.450 00:24:13.450 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:13.450 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:13.450 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:13.711 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.711 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:13.711 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.711 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.711 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.711 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:13.711 { 00:24:13.711 "cntlid": 119, 00:24:13.711 "qid": 0, 00:24:13.711 "state": "enabled", 00:24:13.711 "thread": "nvmf_tgt_poll_group_000", 00:24:13.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:13.711 "listen_address": { 00:24:13.711 "trtype": "TCP", 00:24:13.711 "adrfam": "IPv4", 00:24:13.711 "traddr": "10.0.0.2", 00:24:13.711 "trsvcid": "4420" 00:24:13.711 }, 00:24:13.711 "peer_address": { 00:24:13.711 "trtype": "TCP", 00:24:13.711 "adrfam": "IPv4", 00:24:13.711 "traddr": "10.0.0.1", 
00:24:13.711 "trsvcid": "42228" 00:24:13.711 }, 00:24:13.711 "auth": { 00:24:13.711 "state": "completed", 00:24:13.711 "digest": "sha512", 00:24:13.711 "dhgroup": "ffdhe3072" 00:24:13.711 } 00:24:13.711 } 00:24:13.711 ]' 00:24:13.711 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:13.711 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:13.711 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:13.711 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:13.712 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:13.973 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:13.973 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:13.973 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:13.973 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:24:13.973 14:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:24:14.913 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:14.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:14.913 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:14.913 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.913 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.913 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.913 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:14.913 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:14.913 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:14.913 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:14.913 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:24:14.913 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:14.913 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:14.913 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:14.913 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:14.913 14:34:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:14.913 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:14.913 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.914 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.914 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.914 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:14.914 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:14.914 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:15.174 00:24:15.174 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:15.174 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:15.174 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:15.434 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.434 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:15.434 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.434 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.435 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.435 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:15.435 { 00:24:15.435 "cntlid": 121, 00:24:15.435 "qid": 0, 00:24:15.435 "state": "enabled", 00:24:15.435 "thread": "nvmf_tgt_poll_group_000", 00:24:15.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:15.435 "listen_address": { 00:24:15.435 "trtype": "TCP", 00:24:15.435 "adrfam": "IPv4", 00:24:15.435 "traddr": "10.0.0.2", 00:24:15.435 "trsvcid": "4420" 00:24:15.435 }, 00:24:15.435 "peer_address": { 00:24:15.435 "trtype": "TCP", 00:24:15.435 "adrfam": "IPv4", 00:24:15.435 "traddr": "10.0.0.1", 00:24:15.435 "trsvcid": "42254" 00:24:15.435 }, 00:24:15.435 "auth": { 00:24:15.435 "state": "completed", 00:24:15.435 "digest": "sha512", 00:24:15.435 "dhgroup": "ffdhe4096" 00:24:15.435 } 00:24:15.435 } 00:24:15.435 ]' 00:24:15.435 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:15.435 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:15.435 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:15.435 14:34:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:15.435 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:15.696 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:15.696 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:15.696 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:15.696 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:24:15.696 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:24:16.638 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:16.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:16.638 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:16.638 14:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.638 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.638 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.638 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:16.638 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:16.638 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:16.638 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:24:16.638 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:16.638 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:16.638 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:16.638 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:16.638 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:16.638 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:16.638 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.638 14:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.638 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.638 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:16.638 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:16.638 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:16.900 00:24:16.900 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:16.900 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:17.161 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:17.161 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.161 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:17.161 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.161 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:17.161 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.161 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:17.161 { 00:24:17.161 "cntlid": 123, 00:24:17.161 "qid": 0, 00:24:17.161 "state": "enabled", 00:24:17.161 "thread": "nvmf_tgt_poll_group_000", 00:24:17.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:17.161 "listen_address": { 00:24:17.161 "trtype": "TCP", 00:24:17.161 "adrfam": "IPv4", 00:24:17.161 "traddr": "10.0.0.2", 00:24:17.161 "trsvcid": "4420" 00:24:17.161 }, 00:24:17.161 "peer_address": { 00:24:17.161 "trtype": "TCP", 00:24:17.161 "adrfam": "IPv4", 00:24:17.161 "traddr": "10.0.0.1", 00:24:17.161 "trsvcid": "42288" 00:24:17.161 }, 00:24:17.161 "auth": { 00:24:17.161 "state": "completed", 00:24:17.161 "digest": "sha512", 00:24:17.161 "dhgroup": "ffdhe4096" 00:24:17.161 } 00:24:17.161 } 00:24:17.161 ]' 00:24:17.161 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:17.161 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:17.161 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:17.422 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:17.422 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:17.422 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:17.422 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:17.422 14:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:17.422 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:24:17.422 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:24:18.363 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:18.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:18.363 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:18.363 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.363 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.363 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.363 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:18.363 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:18.364 14:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:18.364 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:24:18.364 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:18.364 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:18.364 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:18.364 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:18.364 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:18.364 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:18.364 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.364 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.624 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.624 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:18.625 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:18.625 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:18.625 00:24:18.886 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:18.886 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:18.886 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:18.886 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.886 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:18.886 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.886 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.886 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.886 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:18.886 { 00:24:18.886 "cntlid": 125, 00:24:18.886 "qid": 0, 00:24:18.886 "state": "enabled", 00:24:18.886 "thread": "nvmf_tgt_poll_group_000", 00:24:18.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:18.886 "listen_address": { 00:24:18.886 "trtype": "TCP", 00:24:18.886 "adrfam": "IPv4", 00:24:18.886 "traddr": "10.0.0.2", 00:24:18.886 
"trsvcid": "4420" 00:24:18.886 }, 00:24:18.886 "peer_address": { 00:24:18.886 "trtype": "TCP", 00:24:18.886 "adrfam": "IPv4", 00:24:18.886 "traddr": "10.0.0.1", 00:24:18.886 "trsvcid": "42308" 00:24:18.886 }, 00:24:18.886 "auth": { 00:24:18.886 "state": "completed", 00:24:18.886 "digest": "sha512", 00:24:18.886 "dhgroup": "ffdhe4096" 00:24:18.886 } 00:24:18.886 } 00:24:18.886 ]' 00:24:18.886 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:18.886 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:18.886 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:19.147 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:19.147 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:19.147 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:19.147 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:19.147 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:19.147 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:24:19.147 14:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:24:20.089 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:20.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:20.089 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:20.089 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.089 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.089 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.089 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:20.089 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:20.089 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:20.089 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:24:20.089 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:20.089 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:20.089 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:24:20.089 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:20.089 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:20.089 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:24:20.089 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.089 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.089 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.089 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:20.089 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:20.089 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:20.350 00:24:20.350 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:20.350 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:20.350 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:20.611 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.611 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:20.611 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.611 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.611 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.611 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:20.611 { 00:24:20.611 "cntlid": 127, 00:24:20.611 "qid": 0, 00:24:20.611 "state": "enabled", 00:24:20.611 "thread": "nvmf_tgt_poll_group_000", 00:24:20.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:20.611 "listen_address": { 00:24:20.611 "trtype": "TCP", 00:24:20.611 "adrfam": "IPv4", 00:24:20.611 "traddr": "10.0.0.2", 00:24:20.611 "trsvcid": "4420" 00:24:20.611 }, 00:24:20.611 "peer_address": { 00:24:20.611 "trtype": "TCP", 00:24:20.611 "adrfam": "IPv4", 00:24:20.611 "traddr": "10.0.0.1", 00:24:20.611 "trsvcid": "42340" 00:24:20.611 }, 00:24:20.611 "auth": { 00:24:20.611 "state": "completed", 00:24:20.611 "digest": "sha512", 00:24:20.611 "dhgroup": "ffdhe4096" 00:24:20.611 } 00:24:20.611 } 00:24:20.611 ]' 00:24:20.611 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:20.611 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:20.611 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:20.611 14:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:20.611 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:20.611 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:20.611 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:20.611 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:20.871 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:24:20.871 14:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:24:21.814 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:21.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:21.814 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:21.814 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.814 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:24:21.814 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.814 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:21.814 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:21.814 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:21.814 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:21.814 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:24:21.814 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:21.814 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:21.814 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:24:21.814 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:21.814 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:21.814 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:21.814 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.814 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:24:21.814 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.814 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:21.814 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:21.814 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:22.075 00:24:22.075 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:22.075 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:22.075 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:22.335 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.336 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:22.336 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.336 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.336 14:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.336 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:22.336 { 00:24:22.336 "cntlid": 129, 00:24:22.336 "qid": 0, 00:24:22.336 "state": "enabled", 00:24:22.336 "thread": "nvmf_tgt_poll_group_000", 00:24:22.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:22.336 "listen_address": { 00:24:22.336 "trtype": "TCP", 00:24:22.336 "adrfam": "IPv4", 00:24:22.336 "traddr": "10.0.0.2", 00:24:22.336 "trsvcid": "4420" 00:24:22.336 }, 00:24:22.336 "peer_address": { 00:24:22.336 "trtype": "TCP", 00:24:22.336 "adrfam": "IPv4", 00:24:22.336 "traddr": "10.0.0.1", 00:24:22.336 "trsvcid": "35474" 00:24:22.336 }, 00:24:22.336 "auth": { 00:24:22.336 "state": "completed", 00:24:22.336 "digest": "sha512", 00:24:22.336 "dhgroup": "ffdhe6144" 00:24:22.336 } 00:24:22.336 } 00:24:22.336 ]' 00:24:22.336 14:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:22.336 14:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:22.336 14:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:22.597 14:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:22.597 14:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:22.597 14:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:22.597 14:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:22.597 14:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:22.597 14:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:24:22.597 14:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:24:23.539 14:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:23.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:23.539 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:23.539 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.539 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.539 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.539 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:23.539 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:23.539 14:34:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:23.539 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:24:23.539 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:23.539 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:23.539 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:24:23.539 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:23.539 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:23.539 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:23.539 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.539 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.539 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.539 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:23.539 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:23.539 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:24.110 00:24:24.110 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:24.110 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:24.110 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:24.110 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.110 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:24.110 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.110 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.110 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.110 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:24.110 { 00:24:24.110 "cntlid": 131, 00:24:24.110 "qid": 0, 00:24:24.110 "state": "enabled", 00:24:24.110 "thread": "nvmf_tgt_poll_group_000", 00:24:24.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:24.110 "listen_address": { 00:24:24.110 "trtype": "TCP", 00:24:24.110 "adrfam": "IPv4", 00:24:24.110 "traddr": "10.0.0.2", 00:24:24.110 
"trsvcid": "4420" 00:24:24.110 }, 00:24:24.110 "peer_address": { 00:24:24.110 "trtype": "TCP", 00:24:24.110 "adrfam": "IPv4", 00:24:24.110 "traddr": "10.0.0.1", 00:24:24.110 "trsvcid": "35502" 00:24:24.110 }, 00:24:24.110 "auth": { 00:24:24.110 "state": "completed", 00:24:24.110 "digest": "sha512", 00:24:24.110 "dhgroup": "ffdhe6144" 00:24:24.110 } 00:24:24.110 } 00:24:24.110 ]' 00:24:24.110 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:24.110 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:24.110 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:24.371 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:24.371 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:24.371 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:24.371 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:24.371 14:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:24.371 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:24:24.371 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:24:25.332 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:25.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:25.332 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:25.332 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.332 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.332 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.332 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:25.332 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:25.332 14:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:25.332 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:24:25.332 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:25.332 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:25.332 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:24:25.332 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:25.332 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:25.332 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:25.332 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.332 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.593 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.593 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:25.593 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:25.593 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:25.854 00:24:25.855 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:25.855 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:24:25.855 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:26.116 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.116 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:26.116 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.116 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:26.116 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.116 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:26.116 { 00:24:26.116 "cntlid": 133, 00:24:26.116 "qid": 0, 00:24:26.116 "state": "enabled", 00:24:26.116 "thread": "nvmf_tgt_poll_group_000", 00:24:26.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:26.116 "listen_address": { 00:24:26.116 "trtype": "TCP", 00:24:26.116 "adrfam": "IPv4", 00:24:26.116 "traddr": "10.0.0.2", 00:24:26.116 "trsvcid": "4420" 00:24:26.116 }, 00:24:26.116 "peer_address": { 00:24:26.116 "trtype": "TCP", 00:24:26.116 "adrfam": "IPv4", 00:24:26.116 "traddr": "10.0.0.1", 00:24:26.116 "trsvcid": "35524" 00:24:26.116 }, 00:24:26.116 "auth": { 00:24:26.116 "state": "completed", 00:24:26.116 "digest": "sha512", 00:24:26.116 "dhgroup": "ffdhe6144" 00:24:26.116 } 00:24:26.116 } 00:24:26.116 ]' 00:24:26.116 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:26.116 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:26.116 14:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:26.116 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:26.116 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:26.116 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:26.116 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:26.116 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:26.377 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:24:26.377 14:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:24:27.318 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:27.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:27.318 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:27.318 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.318 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.318 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.318 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:27.318 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:27.318 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:27.318 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:24:27.318 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:27.318 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:27.318 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:24:27.318 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:27.318 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:27.318 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:24:27.318 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.318 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.318 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.318 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:27.318 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:27.318 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:27.580 00:24:27.580 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:27.580 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:27.580 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:27.841 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.841 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:27.841 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.841 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:27.841 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.841 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:27.841 { 00:24:27.841 "cntlid": 135, 00:24:27.841 "qid": 0, 00:24:27.841 "state": "enabled", 00:24:27.841 "thread": "nvmf_tgt_poll_group_000", 00:24:27.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:27.841 "listen_address": { 00:24:27.841 "trtype": "TCP", 00:24:27.841 "adrfam": "IPv4", 00:24:27.841 "traddr": "10.0.0.2", 00:24:27.841 "trsvcid": "4420" 00:24:27.841 }, 00:24:27.841 "peer_address": { 00:24:27.841 "trtype": "TCP", 00:24:27.841 "adrfam": "IPv4", 00:24:27.841 "traddr": "10.0.0.1", 00:24:27.841 "trsvcid": "35552" 00:24:27.841 }, 00:24:27.841 "auth": { 00:24:27.841 "state": "completed", 00:24:27.841 "digest": "sha512", 00:24:27.841 "dhgroup": "ffdhe6144" 00:24:27.841 } 00:24:27.841 } 00:24:27.841 ]' 00:24:27.841 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:27.841 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:27.841 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:28.102 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:28.102 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:28.102 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:28.102 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:28.102 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:28.102 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:24:28.102 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:24:29.044 14:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:29.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:29.044 14:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:29.044 14:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.044 14:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.044 14:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.044 14:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:24:29.044 14:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:29.044 14:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:29.044 14:34:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:29.044 14:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:24:29.044 14:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:29.044 14:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:29.044 14:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:29.044 14:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:29.044 14:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:29.044 14:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:29.044 14:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.044 14:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.044 14:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.044 14:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:29.044 14:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:29.044 14:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:29.634 00:24:29.634 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:29.634 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:29.634 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:29.960 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.960 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:29.960 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.960 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.960 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.960 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:29.960 { 00:24:29.960 "cntlid": 137, 00:24:29.960 "qid": 0, 00:24:29.960 "state": "enabled", 00:24:29.960 "thread": "nvmf_tgt_poll_group_000", 00:24:29.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:29.960 "listen_address": { 00:24:29.960 "trtype": "TCP", 00:24:29.960 "adrfam": "IPv4", 00:24:29.960 "traddr": "10.0.0.2", 00:24:29.960 
"trsvcid": "4420" 00:24:29.960 }, 00:24:29.960 "peer_address": { 00:24:29.960 "trtype": "TCP", 00:24:29.960 "adrfam": "IPv4", 00:24:29.960 "traddr": "10.0.0.1", 00:24:29.960 "trsvcid": "35574" 00:24:29.960 }, 00:24:29.960 "auth": { 00:24:29.960 "state": "completed", 00:24:29.960 "digest": "sha512", 00:24:29.960 "dhgroup": "ffdhe8192" 00:24:29.960 } 00:24:29.960 } 00:24:29.960 ]' 00:24:29.960 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:29.961 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:29.961 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:29.961 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:29.961 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:29.961 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:29.961 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:29.961 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:30.242 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:24:30.242 14:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:24:30.834 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:31.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:31.094 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:31.094 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.094 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.094 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.094 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:31.094 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:31.094 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:31.094 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:24:31.094 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:31.094 14:34:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:31.094 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:31.095 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:24:31.095 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:31.095 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:31.095 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.095 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.095 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.095 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:31.095 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:31.095 14:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:31.665 00:24:31.665 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:31.665 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:31.665 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:31.924 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.924 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:31.924 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.924 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.924 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.924 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:31.924 { 00:24:31.924 "cntlid": 139, 00:24:31.924 "qid": 0, 00:24:31.924 "state": "enabled", 00:24:31.924 "thread": "nvmf_tgt_poll_group_000", 00:24:31.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:31.924 "listen_address": { 00:24:31.924 "trtype": "TCP", 00:24:31.924 "adrfam": "IPv4", 00:24:31.924 "traddr": "10.0.0.2", 00:24:31.924 "trsvcid": "4420" 00:24:31.924 }, 00:24:31.924 "peer_address": { 00:24:31.924 "trtype": "TCP", 00:24:31.924 "adrfam": "IPv4", 00:24:31.924 "traddr": "10.0.0.1", 00:24:31.924 "trsvcid": "35594" 00:24:31.924 }, 00:24:31.924 "auth": { 00:24:31.924 "state": "completed", 00:24:31.924 "digest": "sha512", 00:24:31.924 "dhgroup": "ffdhe8192" 00:24:31.924 } 00:24:31.924 } 00:24:31.924 ]' 00:24:31.924 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:31.924 14:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:31.924 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:31.924 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:31.924 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:31.924 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:31.924 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:31.924 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:32.185 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:24:32.185 14:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: --dhchap-ctrl-secret DHHC-1:02:NWFjM2E1MzZhOTQ2YmYxNTQwYzZhMTFhYjE0Yzg3N2U1ZTYxYjcxNDcyMzliZjJinW+HsQ==: 00:24:33.127 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:33.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:33.127 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:33.127 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.127 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.127 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.127 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:33.127 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:33.127 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:33.127 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:24:33.127 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:33.127 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:33.127 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:33.127 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:24:33.127 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:33.127 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:24:33.127 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.127 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.127 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.127 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:33.127 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:33.127 14:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:33.698 00:24:33.698 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:33.698 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:33.698 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:33.959 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.959 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:33.959 14:34:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.959 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.959 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.959 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:33.959 { 00:24:33.959 "cntlid": 141, 00:24:33.959 "qid": 0, 00:24:33.959 "state": "enabled", 00:24:33.959 "thread": "nvmf_tgt_poll_group_000", 00:24:33.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:33.959 "listen_address": { 00:24:33.959 "trtype": "TCP", 00:24:33.959 "adrfam": "IPv4", 00:24:33.959 "traddr": "10.0.0.2", 00:24:33.959 "trsvcid": "4420" 00:24:33.959 }, 00:24:33.959 "peer_address": { 00:24:33.959 "trtype": "TCP", 00:24:33.959 "adrfam": "IPv4", 00:24:33.959 "traddr": "10.0.0.1", 00:24:33.959 "trsvcid": "53124" 00:24:33.960 }, 00:24:33.960 "auth": { 00:24:33.960 "state": "completed", 00:24:33.960 "digest": "sha512", 00:24:33.960 "dhgroup": "ffdhe8192" 00:24:33.960 } 00:24:33.960 } 00:24:33.960 ]' 00:24:33.960 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:33.960 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:33.960 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:33.960 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:33.960 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:33.960 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:33.960 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:33.960 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:34.220 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:24:34.220 14:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:01:NmNmNDlhYWU3ODk1NGNmZTY2ZDY2ODEyZjYyMTNjOWWF1YfP: 00:24:35.170 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:35.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:35.170 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:35.170 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.170 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.170 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.170 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:24:35.170 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:35.170 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:35.170 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:24:35.170 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:35.170 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:35.170 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:35.170 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:35.170 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:35.170 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:24:35.170 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.170 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.170 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.170 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:35.170 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:35.170 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:35.742 00:24:35.742 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:35.742 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:35.742 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:36.003 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.003 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:36.003 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.003 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.003 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.003 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:36.003 { 00:24:36.003 "cntlid": 143, 00:24:36.003 "qid": 0, 00:24:36.003 "state": "enabled", 00:24:36.003 "thread": "nvmf_tgt_poll_group_000", 00:24:36.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:36.003 "listen_address": { 00:24:36.003 "trtype": "TCP", 00:24:36.003 "adrfam": 
"IPv4", 00:24:36.003 "traddr": "10.0.0.2", 00:24:36.003 "trsvcid": "4420" 00:24:36.003 }, 00:24:36.003 "peer_address": { 00:24:36.003 "trtype": "TCP", 00:24:36.003 "adrfam": "IPv4", 00:24:36.003 "traddr": "10.0.0.1", 00:24:36.003 "trsvcid": "53146" 00:24:36.003 }, 00:24:36.003 "auth": { 00:24:36.003 "state": "completed", 00:24:36.003 "digest": "sha512", 00:24:36.003 "dhgroup": "ffdhe8192" 00:24:36.003 } 00:24:36.003 } 00:24:36.003 ]' 00:24:36.003 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:36.003 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:36.003 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:36.003 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:36.003 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:36.003 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:36.003 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:36.003 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:36.265 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:24:36.265 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:24:36.836 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:36.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:36.836 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:36.836 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.836 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.836 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.836 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:24:36.836 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:24:36.836 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:24:36.836 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:36.836 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:36.836 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:37.096 14:35:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:24:37.096 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:37.096 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:37.096 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:37.096 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:24:37.096 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:37.096 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:37.096 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.096 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.096 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.096 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:37.096 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:37.096 14:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:37.668 00:24:37.668 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:37.668 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:37.668 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:37.929 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.929 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:37.929 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.929 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:37.929 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.929 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:37.929 { 00:24:37.929 "cntlid": 145, 00:24:37.929 "qid": 0, 00:24:37.929 "state": "enabled", 00:24:37.929 "thread": "nvmf_tgt_poll_group_000", 00:24:37.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:37.929 "listen_address": { 00:24:37.929 "trtype": "TCP", 00:24:37.929 "adrfam": "IPv4", 00:24:37.929 "traddr": "10.0.0.2", 00:24:37.929 "trsvcid": "4420" 00:24:37.929 }, 00:24:37.929 "peer_address": { 00:24:37.929 "trtype": "TCP", 00:24:37.929 "adrfam": "IPv4", 00:24:37.929 "traddr": "10.0.0.1", 00:24:37.930 "trsvcid": "53168" 00:24:37.930 }, 00:24:37.930 "auth": { 00:24:37.930 "state": 
"completed", 00:24:37.930 "digest": "sha512", 00:24:37.930 "dhgroup": "ffdhe8192" 00:24:37.930 } 00:24:37.930 } 00:24:37.930 ]' 00:24:37.930 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:37.930 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:37.930 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:37.930 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:37.930 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:37.930 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:37.930 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:37.930 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:38.191 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:24:38.191 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:MmNiZTRlYWVjYmYxYTIxOTFlOTMxZjQzN2U1NTIxNDY4NGQxYjZmNDg0OWMxNWY0v64bTg==: --dhchap-ctrl-secret 
DHHC-1:03:MTJiZmNkMGYxOTk4ZmU3M2E0YzI5NzU2MmM1MTBhNGIyYzU1MGU3NWQ3Yjc0YWU0NGU5ODc1MDA5NDFjZDUwMu0gXf8=: 00:24:38.762 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:39.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:39.022 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:39.022 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.022 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.022 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.022 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:24:39.022 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.022 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.022 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.022 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:24:39.022 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:39.022 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:24:39.022 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local 
arg=bdev_connect 00:24:39.022 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:39.022 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:39.023 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:39.023 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:24:39.023 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:24:39.023 14:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:24:39.283 request: 00:24:39.283 { 00:24:39.283 "name": "nvme0", 00:24:39.283 "trtype": "tcp", 00:24:39.283 "traddr": "10.0.0.2", 00:24:39.283 "adrfam": "ipv4", 00:24:39.284 "trsvcid": "4420", 00:24:39.284 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:39.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:39.284 "prchk_reftag": false, 00:24:39.284 "prchk_guard": false, 00:24:39.284 "hdgst": false, 00:24:39.284 "ddgst": false, 00:24:39.284 "dhchap_key": "key2", 00:24:39.284 "allow_unrecognized_csi": false, 00:24:39.284 "method": "bdev_nvme_attach_controller", 00:24:39.284 "req_id": 1 00:24:39.284 } 00:24:39.284 Got JSON-RPC error response 00:24:39.284 response: 00:24:39.284 { 00:24:39.284 "code": -5, 00:24:39.284 "message": 
"Input/output error" 00:24:39.284 } 00:24:39.545 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:39.545 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:39.545 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:39.545 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:39.545 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:39.545 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.545 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.545 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.545 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:39.545 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.545 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.545 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.545 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:39.546 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:39.546 14:35:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:39.546 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:39.546 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:39.546 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:39.546 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:39.546 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:39.546 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:39.546 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:39.806 request: 00:24:39.806 { 00:24:39.807 "name": "nvme0", 00:24:39.807 "trtype": "tcp", 00:24:39.807 "traddr": "10.0.0.2", 00:24:39.807 "adrfam": "ipv4", 00:24:39.807 "trsvcid": "4420", 00:24:39.807 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:39.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:39.807 "prchk_reftag": false, 00:24:39.807 "prchk_guard": false, 00:24:39.807 "hdgst": 
false, 00:24:39.807 "ddgst": false, 00:24:39.807 "dhchap_key": "key1", 00:24:39.807 "dhchap_ctrlr_key": "ckey2", 00:24:39.807 "allow_unrecognized_csi": false, 00:24:39.807 "method": "bdev_nvme_attach_controller", 00:24:39.807 "req_id": 1 00:24:39.807 } 00:24:39.807 Got JSON-RPC error response 00:24:39.807 response: 00:24:39.807 { 00:24:39.807 "code": -5, 00:24:39.807 "message": "Input/output error" 00:24:39.807 } 00:24:40.068 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:40.068 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:40.068 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:40.068 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:40.068 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:40.068 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.068 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:40.068 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.068 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:24:40.068 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.068 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:40.068 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.068 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:40.068 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:40.068 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:40.068 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:40.068 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:40.068 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:40.068 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:40.068 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:40.068 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:40.068 14:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:40.329 request: 00:24:40.329 { 00:24:40.329 "name": "nvme0", 00:24:40.329 "trtype": 
"tcp", 00:24:40.329 "traddr": "10.0.0.2", 00:24:40.329 "adrfam": "ipv4", 00:24:40.329 "trsvcid": "4420", 00:24:40.329 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:40.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:40.329 "prchk_reftag": false, 00:24:40.329 "prchk_guard": false, 00:24:40.329 "hdgst": false, 00:24:40.329 "ddgst": false, 00:24:40.329 "dhchap_key": "key1", 00:24:40.329 "dhchap_ctrlr_key": "ckey1", 00:24:40.329 "allow_unrecognized_csi": false, 00:24:40.329 "method": "bdev_nvme_attach_controller", 00:24:40.329 "req_id": 1 00:24:40.329 } 00:24:40.329 Got JSON-RPC error response 00:24:40.329 response: 00:24:40.329 { 00:24:40.329 "code": -5, 00:24:40.329 "message": "Input/output error" 00:24:40.329 } 00:24:40.591 14:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:40.591 14:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:40.591 14:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:40.591 14:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:40.591 14:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:40.591 14:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.591 14:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:40.591 14:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.591 14:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3018548 00:24:40.591 14:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@950 -- # '[' -z 3018548 ']' 00:24:40.591 14:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3018548 00:24:40.591 14:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:24:40.591 14:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:40.591 14:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3018548 00:24:40.591 14:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:40.591 14:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:40.591 14:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3018548' 00:24:40.591 killing process with pid 3018548 00:24:40.591 14:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3018548 00:24:40.591 14:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3018548 00:24:41.536 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:24:41.536 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:41.536 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:41.536 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.536 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=3046207 00:24:41.536 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 3046207 00:24:41.536 14:35:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:24:41.536 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3046207 ']' 00:24:41.536 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.536 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:41.536 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:41.536 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:41.536 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.479 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:42.479 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:24:42.479 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:42.479 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:42.479 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.479 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.479 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:24:42.479 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 3046207 00:24:42.479 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3046207 ']' 00:24:42.479 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.479 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:42.479 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.479 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:42.479 14:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.479 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:42.479 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:24:42.479 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:24:42.479 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.479 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.740 null0 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Ryi 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.0og ]] 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0og 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.6Jd 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.kmx ]] 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kmx 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.s6i 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.740 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.OKB ]] 00:24:42.741 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.OKB 00:24:42.741 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.741 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.741 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.741 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:24:42.741 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.y9T 00:24:42.741 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.741 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:43.001 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:24:43.001 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:24:43.001 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:24:43.001 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:24:43.001 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:24:43.001 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:24:43.001 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:24:43.001 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:24:43.001 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:24:43.001 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.001 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:43.001 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.001 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:43.001 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:43.001 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:43.945 nvme0n1 00:24:43.945 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:24:43.945 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:43.945 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:24:43.945 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.945 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:43.945 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.945 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:43.945 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.945 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:24:43.945 { 00:24:43.945 "cntlid": 1, 00:24:43.945 "qid": 0, 00:24:43.945 "state": "enabled", 00:24:43.945 "thread": "nvmf_tgt_poll_group_000", 00:24:43.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:43.945 "listen_address": { 00:24:43.945 "trtype": "TCP", 00:24:43.945 "adrfam": "IPv4", 00:24:43.945 "traddr": "10.0.0.2", 00:24:43.945 "trsvcid": "4420" 00:24:43.945 }, 00:24:43.945 "peer_address": { 00:24:43.945 "trtype": "TCP", 00:24:43.945 "adrfam": "IPv4", 00:24:43.945 "traddr": 
"10.0.0.1", 00:24:43.945 "trsvcid": "41132" 00:24:43.945 }, 00:24:43.945 "auth": { 00:24:43.945 "state": "completed", 00:24:43.945 "digest": "sha512", 00:24:43.945 "dhgroup": "ffdhe8192" 00:24:43.945 } 00:24:43.945 } 00:24:43.945 ]' 00:24:43.945 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:24:43.945 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:43.945 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:24:43.945 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:43.945 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:24:44.206 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:44.206 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:44.206 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:44.206 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:24:44.206 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:24:45.150 14:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:45.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:45.150 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:45.150 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.150 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:45.150 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.150 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:24:45.150 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.150 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:45.150 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.150 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:24:45.150 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:24:45.150 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:24:45.150 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:45.150 14:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:24:45.150 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:45.150 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:45.150 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:45.150 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:45.150 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:45.150 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:45.150 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:45.411 request: 00:24:45.411 { 00:24:45.411 "name": "nvme0", 00:24:45.411 "trtype": "tcp", 00:24:45.411 "traddr": "10.0.0.2", 00:24:45.411 "adrfam": "ipv4", 00:24:45.411 "trsvcid": "4420", 00:24:45.411 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:45.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:45.411 "prchk_reftag": false, 00:24:45.411 "prchk_guard": false, 00:24:45.411 "hdgst": false, 00:24:45.411 "ddgst": false, 00:24:45.411 "dhchap_key": "key3", 00:24:45.411 
"allow_unrecognized_csi": false, 00:24:45.411 "method": "bdev_nvme_attach_controller", 00:24:45.411 "req_id": 1 00:24:45.411 } 00:24:45.411 Got JSON-RPC error response 00:24:45.411 response: 00:24:45.411 { 00:24:45.411 "code": -5, 00:24:45.411 "message": "Input/output error" 00:24:45.411 } 00:24:45.411 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:45.411 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:45.411 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:45.411 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:45.411 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:24:45.411 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:24:45.411 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:45.411 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:24:45.689 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:24:45.690 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:45.690 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:24:45.690 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:45.690 14:35:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:45.690 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:45.690 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:45.690 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:24:45.690 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:45.690 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:24:45.690 request: 00:24:45.690 { 00:24:45.690 "name": "nvme0", 00:24:45.690 "trtype": "tcp", 00:24:45.690 "traddr": "10.0.0.2", 00:24:45.690 "adrfam": "ipv4", 00:24:45.690 "trsvcid": "4420", 00:24:45.690 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:45.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:45.690 "prchk_reftag": false, 00:24:45.690 "prchk_guard": false, 00:24:45.690 "hdgst": false, 00:24:45.690 "ddgst": false, 00:24:45.690 "dhchap_key": "key3", 00:24:45.690 "allow_unrecognized_csi": false, 00:24:45.690 "method": "bdev_nvme_attach_controller", 00:24:45.690 "req_id": 1 00:24:45.690 } 00:24:45.690 Got JSON-RPC error response 00:24:45.690 response: 00:24:45.690 { 00:24:45.690 "code": -5, 00:24:45.690 "message": "Input/output error" 00:24:45.690 } 00:24:45.690 
14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:45.690 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:45.690 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:45.690 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:45.690 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:24:45.690 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:24:45.690 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:24:45.690 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:45.690 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:45.690 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:45.951 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:45.951 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.951 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:45.951 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.951 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:45.951 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.951 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:45.951 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.952 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:45.952 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:45.952 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:45.952 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:45.952 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:45.952 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:45.952 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:45.952 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:45.952 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:45.952 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:46.212 request: 00:24:46.212 { 00:24:46.212 "name": "nvme0", 00:24:46.212 "trtype": "tcp", 00:24:46.212 "traddr": "10.0.0.2", 00:24:46.212 "adrfam": "ipv4", 00:24:46.212 "trsvcid": "4420", 00:24:46.212 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:46.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:46.212 "prchk_reftag": false, 00:24:46.212 "prchk_guard": false, 00:24:46.212 "hdgst": false, 00:24:46.212 "ddgst": false, 00:24:46.212 "dhchap_key": "key0", 00:24:46.212 "dhchap_ctrlr_key": "key1", 00:24:46.212 "allow_unrecognized_csi": false, 00:24:46.212 "method": "bdev_nvme_attach_controller", 00:24:46.212 "req_id": 1 00:24:46.212 } 00:24:46.212 Got JSON-RPC error response 00:24:46.212 response: 00:24:46.212 { 00:24:46.212 "code": -5, 00:24:46.212 "message": "Input/output error" 00:24:46.212 } 00:24:46.212 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:46.212 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:46.212 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:46.213 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:46.213 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:24:46.213 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:24:46.213 14:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:24:46.474 nvme0n1 00:24:46.474 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:24:46.474 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:24:46.474 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:46.735 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.735 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:46.735 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:46.996 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:24:46.996 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.996 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:24:46.996 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.996 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:24:46.996 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:46.996 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:47.937 nvme0n1 00:24:47.937 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:24:47.937 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:24:47.937 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:47.937 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.937 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:47.937 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.937 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:47.938 
14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.938 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:24:47.938 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:24:47.938 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:48.198 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.198 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:24:48.198 14:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: --dhchap-ctrl-secret DHHC-1:03:ZjYwOGMyNTIxMGM0MWY2NDBhYTJkNDcyNjAyNGYyNzI4MDYyZGQ5NTgxMTY4Zjk3YjlmODg4OWNiOTliYWMwOZ3j6k8=: 00:24:48.769 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:24:48.769 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:24:48.770 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:24:48.770 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:24:48.770 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:24:48.770 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:24:48.770 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:24:48.770 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:48.770 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:49.031 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:24:49.031 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:49.031 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:24:49.031 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:24:49.031 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:49.031 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:24:49.031 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:49.031 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:24:49.031 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:49.031 14:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:49.602 request: 00:24:49.602 { 00:24:49.602 "name": "nvme0", 00:24:49.602 "trtype": "tcp", 00:24:49.602 "traddr": "10.0.0.2", 00:24:49.602 "adrfam": "ipv4", 00:24:49.602 "trsvcid": "4420", 00:24:49.602 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:49.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:24:49.602 "prchk_reftag": false, 00:24:49.602 "prchk_guard": false, 00:24:49.602 "hdgst": false, 00:24:49.602 "ddgst": false, 00:24:49.602 "dhchap_key": "key1", 00:24:49.602 "allow_unrecognized_csi": false, 00:24:49.602 "method": "bdev_nvme_attach_controller", 00:24:49.602 "req_id": 1 00:24:49.602 } 00:24:49.602 Got JSON-RPC error response 00:24:49.602 response: 00:24:49.602 { 00:24:49.602 "code": -5, 00:24:49.602 "message": "Input/output error" 00:24:49.602 } 00:24:49.602 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:49.602 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:49.602 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:49.602 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:49.602 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:49.602 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:49.602 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:50.544 nvme0n1 00:24:50.544 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:24:50.544 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:24:50.544 14:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:50.544 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.544 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:50.544 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:50.805 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:50.805 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.805 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:50.805 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.805 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:24:50.805 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:24:50.805 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:24:50.805 nvme0n1 00:24:51.065 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:24:51.065 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:24:51.065 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:51.065 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.065 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:51.065 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:51.325 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:51.325 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.325 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:51.325 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.325 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: '' 2s 00:24:51.325 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:24:51.325 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:24:51.325 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: 00:24:51.325 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:24:51.325 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:24:51.325 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:24:51.325 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: ]] 00:24:51.325 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MjI3NTcwMzVmZWUwZTc5NWI1YjcyYjFhNmNhNDhmZmYDzNfu: 00:24:51.325 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:24:51.325 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:24:51.326 14:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:24:53.234 
14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:24:53.234 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:24:53.234 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:24:53.234 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:24:53.495 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:24:53.495 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:24:53.495 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:24:53.495 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key2 00:24:53.495 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.495 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:53.495 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.495 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: 2s 00:24:53.495 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:24:53.495 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:24:53.495 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:24:53.495 14:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: 00:24:53.495 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:24:53.495 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:24:53.495 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:24:53.495 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: ]] 00:24:53.495 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MTRiZGYwYjkzYTMxMmYyMWQ3OGVjODk3NzMwNTljZTJhZDY4NTNjNGE1MjhhNzYz58cYnQ==: 00:24:53.495 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:24:53.495 14:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:24:55.408 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:24:55.408 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:24:55.408 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:24:55.408 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:24:55.408 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:24:55.408 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:24:55.408 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:24:55.408 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:55.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:55.409 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:55.409 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.409 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:55.409 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.409 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:55.409 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:55.409 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:56.352 nvme0n1 00:24:56.352 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:24:56.352 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.352 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:56.352 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.352 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:56.352 14:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:56.922 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:24:56.922 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:56.922 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:24:57.182 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.182 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:57.182 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.182 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:57.182 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.182 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:24:57.183 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:24:57.183 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:24:57.183 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:24:57.183 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:57.444 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.444 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:57.444 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.444 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:57.444 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.444 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:57.444 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:24:57.444 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:57.444 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:24:57.444 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:57.444 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:24:57.445 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:57.445 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:57.445 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:58.016 request: 00:24:58.016 { 00:24:58.016 "name": "nvme0", 00:24:58.016 "dhchap_key": "key1", 00:24:58.016 "dhchap_ctrlr_key": "key3", 00:24:58.016 "method": "bdev_nvme_set_keys", 00:24:58.016 "req_id": 1 00:24:58.016 } 00:24:58.016 Got JSON-RPC error response 00:24:58.016 response: 00:24:58.016 { 00:24:58.016 "code": -13, 00:24:58.016 "message": "Permission denied" 00:24:58.016 } 00:24:58.016 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:24:58.016 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:58.016 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:58.016 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:58.016 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:24:58.017 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:24:58.017 14:35:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:58.017 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:24:58.017 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:24:59.400 14:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:24:59.400 14:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:24:59.400 14:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:59.400 14:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:24:59.400 14:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:59.400 14:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.400 14:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:59.400 14:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.400 14:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:59.400 14:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:59.400 14:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:00.343 nvme0n1 00:25:00.344 14:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:25:00.344 14:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.344 14:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:00.344 14:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.344 14:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:25:00.344 14:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:25:00.344 14:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:25:00.344 14:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:25:00.344 14:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:00.344 14:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:25:00.344 14:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:00.344 14:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:25:00.344 14:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:25:00.604 request: 00:25:00.604 { 00:25:00.604 "name": "nvme0", 00:25:00.604 "dhchap_key": "key2", 00:25:00.604 "dhchap_ctrlr_key": "key0", 00:25:00.604 "method": "bdev_nvme_set_keys", 00:25:00.604 "req_id": 1 00:25:00.604 } 00:25:00.604 Got JSON-RPC error response 00:25:00.604 response: 00:25:00.604 { 00:25:00.604 "code": -13, 00:25:00.604 "message": "Permission denied" 00:25:00.604 } 00:25:00.604 14:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:25:00.604 14:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:00.604 14:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:00.604 14:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:00.604 14:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:25:00.604 14:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:25:00.604 14:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:00.865 14:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:25:00.865 14:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:25:01.807 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:25:01.807 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:25:01.807 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:02.068 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:25:02.068 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:25:02.068 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:25:02.068 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3018578 00:25:02.068 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3018578 ']' 00:25:02.068 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3018578 00:25:02.068 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:25:02.068 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:02.068 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3018578 00:25:02.068 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:02.068 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:02.068 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@968 -- # echo 'killing process with pid 3018578' 00:25:02.068 killing process with pid 3018578 00:25:02.068 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3018578 00:25:02.068 14:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3018578 00:25:03.451 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:25:03.451 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:03.451 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:25:03.451 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:03.451 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:25:03.451 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:03.451 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:03.451 rmmod nvme_tcp 00:25:03.451 rmmod nvme_fabrics 00:25:03.451 rmmod nvme_keyring 00:25:03.451 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:03.451 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:25:03.451 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:25:03.451 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 3046207 ']' 00:25:03.451 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 3046207 00:25:03.451 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3046207 ']' 00:25:03.451 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3046207 
00:25:03.451 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:25:03.451 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:03.451 14:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3046207 00:25:03.451 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:03.451 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:03.451 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3046207' 00:25:03.451 killing process with pid 3046207 00:25:03.451 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3046207 00:25:03.451 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3046207 00:25:04.394 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:04.394 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:04.394 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:04.394 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:25:04.394 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:25:04.394 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:04.394 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:25:04.394 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:04.394 14:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:04.394 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.394 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:04.394 14:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.306 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:06.306 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Ryi /tmp/spdk.key-sha256.6Jd /tmp/spdk.key-sha384.s6i /tmp/spdk.key-sha512.y9T /tmp/spdk.key-sha512.0og /tmp/spdk.key-sha384.kmx /tmp/spdk.key-sha256.OKB '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:25:06.306 00:25:06.306 real 2m48.539s 00:25:06.306 user 6m13.474s 00:25:06.306 sys 0m24.720s 00:25:06.306 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:06.306 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:06.306 ************************************ 00:25:06.306 END TEST nvmf_auth_target 00:25:06.306 ************************************ 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- 
# xtrace_disable 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:06.567 ************************************ 00:25:06.567 START TEST nvmf_bdevio_no_huge 00:25:06.567 ************************************ 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:25:06.567 * Looking for test storage... 00:25:06.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:25:06.567 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:06.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.829 --rc genhtml_branch_coverage=1 00:25:06.829 --rc genhtml_function_coverage=1 00:25:06.829 --rc genhtml_legend=1 00:25:06.829 --rc geninfo_all_blocks=1 00:25:06.829 --rc geninfo_unexecuted_blocks=1 00:25:06.829 00:25:06.829 ' 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:06.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.829 --rc genhtml_branch_coverage=1 00:25:06.829 --rc genhtml_function_coverage=1 00:25:06.829 --rc genhtml_legend=1 00:25:06.829 --rc geninfo_all_blocks=1 00:25:06.829 --rc geninfo_unexecuted_blocks=1 00:25:06.829 00:25:06.829 ' 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:06.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.829 --rc genhtml_branch_coverage=1 00:25:06.829 --rc genhtml_function_coverage=1 00:25:06.829 --rc genhtml_legend=1 00:25:06.829 --rc geninfo_all_blocks=1 00:25:06.829 --rc geninfo_unexecuted_blocks=1 00:25:06.829 00:25:06.829 ' 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:06.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.829 --rc genhtml_branch_coverage=1 
00:25:06.829 --rc genhtml_function_coverage=1 00:25:06.829 --rc genhtml_legend=1 00:25:06.829 --rc geninfo_all_blocks=1 00:25:06.829 --rc geninfo_unexecuted_blocks=1 00:25:06.829 00:25:06.829 ' 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:06.829 14:35:30 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:06.829 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:06.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:25:06.830 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 
0x159b)' 00:25:14.972 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:14.972 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- 
# for pci in "${pci_devs[@]}" 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:14.972 Found net devices under 0000:31:00.0: cvl_0_0 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.972 
14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:14.972 Found net devices under 0000:31:00.1: cvl_0_1 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:14.972 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:25:14.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:25:14.972 00:25:14.972 --- 10.0.0.2 ping statistics --- 00:25:14.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.972 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:25:14.973 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:14.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:14.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:25:14.973 00:25:14.973 --- 10.0.0.1 ping statistics --- 00:25:14.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.973 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:25:14.973 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.973 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:25:14.973 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:14.973 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.973 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:14.973 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:14.973 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.973 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:14.973 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:14.973 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:25:14.973 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:14.973 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:14.973 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:14.973 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=3055085 00:25:14.973 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 3055085 00:25:14.973 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:25:14.973 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 3055085 ']' 00:25:14.973 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.973 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:14.973 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.973 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:14.973 14:35:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:14.973 [2024-10-07 14:35:38.020097] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:25:14.973 [2024-10-07 14:35:38.020209] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:25:14.973 [2024-10-07 14:35:38.183609] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:14.973 [2024-10-07 14:35:38.394838] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.973 [2024-10-07 14:35:38.394904] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.973 [2024-10-07 14:35:38.394917] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.973 [2024-10-07 14:35:38.394930] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:14.973 [2024-10-07 14:35:38.394940] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
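The namespace plumbing traced above (nvmf/common.sh lines @265–@291) can be condensed into a standalone sketch. This is annotated configuration, not a portable script: it requires root, and the interface names cvl_0_0/cvl_0_1 and addresses are taken from this log run; on other hardware the names will differ.

```shell
# Sketch of the target-namespace setup performed by nvmf/common.sh
# (requires root; interface names and IPs taken from this log).
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0                      # clear any stale addresses
ip -4 addr flush cvl_0_1
ip netns add "$NVMF_TARGET_NAMESPACE"         # namespace for the target side
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"

ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the host
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

# open the NVMe/TCP port on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# verify reachability in both directions, as the test does before starting
ping -c 1 10.0.0.2
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1
```

With this in place, the target (`nvmf_tgt`) is launched under `ip netns exec "$NVMF_TARGET_NAMESPACE"`, which is why `NVMF_APP` is prefixed with `NVMF_TARGET_NS_CMD` in the log above.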
00:25:14.973 [2024-10-07 14:35:38.397534] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:25:14.973 [2024-10-07 14:35:38.397668] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:25:14.973 [2024-10-07 14:35:38.397772] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:25:14.973 [2024-10-07 14:35:38.397801] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:25:15.234 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:15.234 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:25:15.234 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:15.234 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:15.234 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:15.234 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.234 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:15.234 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.234 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:15.234 [2024-10-07 14:35:38.851593] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.234 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.234 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:15.234 14:35:38 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.234 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:15.234 Malloc0 00:25:15.234 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.234 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:15.234 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.234 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:15.234 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.234 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:15.234 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.234 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:15.234 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.234 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:15.234 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.234 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:15.496 [2024-10-07 14:35:38.946138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.496 14:35:38 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.496 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:25:15.496 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:25:15.496 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:25:15.496 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:25:15.496 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:25:15.496 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:25:15.496 { 00:25:15.496 "params": { 00:25:15.496 "name": "Nvme$subsystem", 00:25:15.496 "trtype": "$TEST_TRANSPORT", 00:25:15.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:15.496 "adrfam": "ipv4", 00:25:15.496 "trsvcid": "$NVMF_PORT", 00:25:15.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:15.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:15.496 "hdgst": ${hdgst:-false}, 00:25:15.496 "ddgst": ${ddgst:-false} 00:25:15.496 }, 00:25:15.496 "method": "bdev_nvme_attach_controller" 00:25:15.497 } 00:25:15.497 EOF 00:25:15.497 )") 00:25:15.497 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:25:15.497 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 
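The `gen_nvmf_target_json` helper above builds a per-subsystem JSON fragment from a heredoc and normalizes it with `jq`. A hedged, self-contained re-creation of that expansion for subsystem 1 is shown below; the variable values (10.0.0.2:4420, cnode1/host1) are the ones this log prints, and the surrounding function machinery of nvmf/common.sh is elided.

```shell
# Sketch: how the heredoc in nvmf/common.sh@580 expands for subsystem 1.
# Values mirror the printf output in the log; hdgst/ddgst default to false
# when unset, via ${var:-false}.
subsystem=1
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=$(cat <<-EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

The resulting object is what `bdevio` receives on `/dev/fd/62` (after `jq` joins the fragments), telling it to attach an NVMe-oF controller at 10.0.0.2:4420 — matching the expanded JSON the log prints next.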
00:25:15.497 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:25:15.497 14:35:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:25:15.497 "params": { 00:25:15.497 "name": "Nvme1", 00:25:15.497 "trtype": "tcp", 00:25:15.497 "traddr": "10.0.0.2", 00:25:15.497 "adrfam": "ipv4", 00:25:15.497 "trsvcid": "4420", 00:25:15.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:15.497 "hdgst": false, 00:25:15.497 "ddgst": false 00:25:15.497 }, 00:25:15.497 "method": "bdev_nvme_attach_controller" 00:25:15.497 }' 00:25:15.497 [2024-10-07 14:35:39.046817] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:25:15.497 [2024-10-07 14:35:39.046950] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3055194 ] 00:25:15.497 [2024-10-07 14:35:39.198325] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:15.757 [2024-10-07 14:35:39.399107] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.757 [2024-10-07 14:35:39.399335] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.757 [2024-10-07 14:35:39.399338] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:25:16.329 I/O targets: 00:25:16.329 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:25:16.329 00:25:16.329 00:25:16.329 CUnit - A unit testing framework for C - Version 2.1-3 00:25:16.329 http://cunit.sourceforge.net/ 00:25:16.329 00:25:16.329 00:25:16.329 Suite: bdevio tests on: Nvme1n1 00:25:16.329 Test: blockdev write read block ...passed 00:25:16.329 Test: blockdev write zeroes read block ...passed 00:25:16.329 Test: blockdev write zeroes read no split ...passed 00:25:16.329 Test: blockdev write zeroes 
read split ...passed 00:25:16.329 Test: blockdev write zeroes read split partial ...passed 00:25:16.329 Test: blockdev reset ...[2024-10-07 14:35:39.995069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:16.329 [2024-10-07 14:35:39.995184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039bf00 (9): Bad file descriptor 00:25:16.589 [2024-10-07 14:35:40.098331] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:16.589 passed 00:25:16.589 Test: blockdev write read 8 blocks ...passed 00:25:16.589 Test: blockdev write read size > 128k ...passed 00:25:16.589 Test: blockdev write read invalid size ...passed 00:25:16.589 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:16.589 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:16.589 Test: blockdev write read max offset ...passed 00:25:16.589 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:16.589 Test: blockdev writev readv 8 blocks ...passed 00:25:16.590 Test: blockdev writev readv 30 x 1block ...passed 00:25:16.851 Test: blockdev writev readv block ...passed 00:25:16.851 Test: blockdev writev readv size > 128k ...passed 00:25:16.851 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:16.851 Test: blockdev comparev and writev ...[2024-10-07 14:35:40.365542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:16.851 [2024-10-07 14:35:40.365587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.851 [2024-10-07 14:35:40.365605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:16.851 [2024-10-07 14:35:40.365614] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.851 [2024-10-07 14:35:40.366158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:16.851 [2024-10-07 14:35:40.366173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:16.851 [2024-10-07 14:35:40.366186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:16.851 [2024-10-07 14:35:40.366194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:16.851 [2024-10-07 14:35:40.366737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:16.851 [2024-10-07 14:35:40.366752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:16.851 [2024-10-07 14:35:40.366767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:16.851 [2024-10-07 14:35:40.366775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:16.851 [2024-10-07 14:35:40.367309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:16.851 [2024-10-07 14:35:40.367324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:16.851 [2024-10-07 14:35:40.367341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x200 00:25:16.851 [2024-10-07 14:35:40.367350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:16.851 passed 00:25:16.851 Test: blockdev nvme passthru rw ...passed 00:25:16.851 Test: blockdev nvme passthru vendor specific ...[2024-10-07 14:35:40.450659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:16.851 [2024-10-07 14:35:40.450682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:16.851 [2024-10-07 14:35:40.451056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:16.851 [2024-10-07 14:35:40.451069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:16.851 [2024-10-07 14:35:40.451346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:16.851 [2024-10-07 14:35:40.451358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:16.851 [2024-10-07 14:35:40.451741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:16.851 [2024-10-07 14:35:40.451753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:16.851 passed 00:25:16.851 Test: blockdev nvme admin passthru ...passed 00:25:16.851 Test: blockdev copy ...passed 00:25:16.851 00:25:16.851 Run Summary: Type Total Ran Passed Failed Inactive 00:25:16.851 suites 1 1 n/a 0 0 00:25:16.851 tests 23 23 23 0 0 00:25:16.851 asserts 152 152 152 0 n/a 00:25:16.851 00:25:16.851 Elapsed time = 1.533 seconds 00:25:17.422 14:35:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:17.422 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.422 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:17.422 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.422 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:25:17.422 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:25:17.422 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:17.422 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:25:17.422 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:17.422 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:25:17.422 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:17.422 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:17.422 rmmod nvme_tcp 00:25:17.683 rmmod nvme_fabrics 00:25:17.683 rmmod nvme_keyring 00:25:17.683 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:17.683 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:25:17.683 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:25:17.683 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 3055085 ']' 00:25:17.683 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@516 -- # killprocess 3055085 00:25:17.683 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 3055085 ']' 00:25:17.683 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 3055085 00:25:17.683 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:25:17.683 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:17.683 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3055085 00:25:17.683 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:25:17.683 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:25:17.683 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3055085' 00:25:17.683 killing process with pid 3055085 00:25:17.683 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 3055085 00:25:17.683 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 3055085 00:25:18.253 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:18.253 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:18.253 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:18.253 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:25:18.253 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:18.253 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@789 -- # iptables-save 00:25:18.253 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:25:18.253 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:18.253 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:18.253 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.253 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.253 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.166 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:20.166 00:25:20.166 real 0m13.755s 00:25:20.166 user 0m19.005s 00:25:20.166 sys 0m6.991s 00:25:20.166 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:20.166 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:20.166 ************************************ 00:25:20.166 END TEST nvmf_bdevio_no_huge 00:25:20.166 ************************************ 00:25:20.427 14:35:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:25:20.427 14:35:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:20.427 14:35:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:20.427 14:35:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:20.427 ************************************ 00:25:20.427 START TEST nvmf_tls 00:25:20.427 
************************************ 00:25:20.427 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:25:20.427 * Looking for test storage... 00:25:20.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:20.427 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:20.427 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:25:20.427 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:20.427 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:20.427 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:20.427 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:20.427 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:20.427 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:25:20.427 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:25:20.427 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:25:20.427 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:25:20.427 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:25:20.427 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:25:20.427 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:25:20.427 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:20.427 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- scripts/common.sh@344 -- # case "$op" in 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:20.428 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.428 --rc genhtml_branch_coverage=1 00:25:20.428 --rc genhtml_function_coverage=1 00:25:20.428 --rc genhtml_legend=1 00:25:20.428 --rc geninfo_all_blocks=1 00:25:20.428 --rc geninfo_unexecuted_blocks=1 00:25:20.428 00:25:20.428 ' 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:20.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.428 --rc genhtml_branch_coverage=1 00:25:20.428 --rc genhtml_function_coverage=1 00:25:20.428 --rc genhtml_legend=1 00:25:20.428 --rc geninfo_all_blocks=1 00:25:20.428 --rc geninfo_unexecuted_blocks=1 00:25:20.428 00:25:20.428 ' 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:20.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.428 --rc genhtml_branch_coverage=1 00:25:20.428 --rc genhtml_function_coverage=1 00:25:20.428 --rc genhtml_legend=1 00:25:20.428 --rc geninfo_all_blocks=1 00:25:20.428 --rc geninfo_unexecuted_blocks=1 00:25:20.428 00:25:20.428 ' 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:20.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.428 --rc genhtml_branch_coverage=1 00:25:20.428 --rc genhtml_function_coverage=1 00:25:20.428 --rc genhtml_legend=1 00:25:20.428 --rc geninfo_all_blocks=1 00:25:20.428 --rc geninfo_unexecuted_blocks=1 00:25:20.428 00:25:20.428 ' 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.428 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:20.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:20.689 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.690 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:20.690 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:20.690 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:20.690 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.690 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:20.690 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.690 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:20.690 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:20.690 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:25:20.690 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:28.833 14:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:28.833 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:28.833 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:28.833 14:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:28.833 Found net devices under 0000:31:00.0: cvl_0_0 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:28.833 Found net devices under 0000:31:00.1: cvl_0_1 00:25:28.833 14:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:28.833 
14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:28.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:28.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:25:28.833 00:25:28.833 --- 10.0.0.2 ping statistics --- 00:25:28.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.833 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:28.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:28.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:25:28.833 00:25:28.833 --- 10.0.0.1 ping statistics --- 00:25:28.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.833 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:28.833 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:28.834 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:25:28.834 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:28.834 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:28.834 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:28.834 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3060101 00:25:28.834 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3060101 00:25:28.834 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:25:28.834 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3060101 ']' 00:25:28.834 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.834 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:28.834 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.834 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:28.834 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:28.834 [2024-10-07 14:35:51.825221] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:25:28.834 [2024-10-07 14:35:51.825361] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.834 [2024-10-07 14:35:51.984764] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.834 [2024-10-07 14:35:52.209958] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.834 [2024-10-07 14:35:52.210043] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:28.834 [2024-10-07 14:35:52.210056] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:28.834 [2024-10-07 14:35:52.210070] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:28.834 [2024-10-07 14:35:52.210081] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:28.834 [2024-10-07 14:35:52.211529] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.095 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:29.095 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:29.095 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:29.095 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:29.095 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:29.095 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.095 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:25:29.095 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:25:29.356 true 00:25:29.356 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:29.356 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:25:29.356 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:25:29.356 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:25:29.356 
14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:25:29.616 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:25:29.616 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:29.877 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:25:29.877 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:25:29.877 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:25:29.877 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:29.877 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:25:30.138 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:25:30.138 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:25:30.138 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:30.138 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:25:30.399 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:25:30.399 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:25:30.399 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:25:30.659 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:30.660 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:25:30.660 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:25:30.660 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:25:30.660 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:25:30.920 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:30.920 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:25:31.181 14:35:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.deuSSIW9HJ 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.uCGnxUiiem 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.deuSSIW9HJ 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.uCGnxUiiem 00:25:31.181 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:25:31.442 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:25:31.703 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.deuSSIW9HJ 00:25:31.703 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.deuSSIW9HJ 00:25:31.703 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:31.969 [2024-10-07 14:35:55.427841] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.969 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:31.969 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:32.231 [2024-10-07 14:35:55.748637] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:32.231 [2024-10-07 14:35:55.748872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:32.231 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:32.231 malloc0 00:25:32.492 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:32.492 14:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.deuSSIW9HJ 00:25:32.752 14:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:32.752 14:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.deuSSIW9HJ 00:25:44.987 Initializing NVMe Controllers 00:25:44.987 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:44.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:44.987 Initialization complete. Launching workers. 
00:25:44.987 ======================================================== 00:25:44.987 Latency(us) 00:25:44.987 Device Information : IOPS MiB/s Average min max 00:25:44.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15365.70 60.02 4165.23 1616.27 5247.63 00:25:44.987 ======================================================== 00:25:44.987 Total : 15365.70 60.02 4165.23 1616.27 5247.63 00:25:44.987 00:25:44.987 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.deuSSIW9HJ 00:25:44.987 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:44.987 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:44.987 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:44.987 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.deuSSIW9HJ 00:25:44.987 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:44.987 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3063020 00:25:44.987 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:44.987 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3063020 /var/tmp/bdevperf.sock 00:25:44.987 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:44.987 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3063020 ']' 00:25:44.987 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
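The successful perf run above was preceded by a fixed `rpc.py` bring-up sequence (TLS socket options, transport, subsystem, listener with `-k`, namespace, keyring, host with `--psk`). The sketch below replays that sequence in dry-run form; the `RPC` path and `/tmp/psk.key` filename are placeholders (the log used mktemp-generated key files), and `DRY=echo` prints each command instead of executing it against a live SPDK app.

```shell
#!/bin/sh
# Dry-run replay of the TLS target setup sequence from the log above.
# RPC path and /tmp/psk.key are placeholders; set DRY="" to actually run
# the commands against a started SPDK nvmf target.
RPC="${RPC:-./scripts/rpc.py}"
DRY="${DRY:-echo}"
PSK="${PSK:-/tmp/psk.key}"

$DRY $RPC sock_impl_set_options -i ssl --tls-version 13
$DRY $RPC framework_start_init
$DRY $RPC nvmf_create_transport -t tcp -o
$DRY $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# -k marks the listener as requiring a TLS secure channel
$DRY $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$DRY $RPC bdev_malloc_create 32 4096 -b malloc0
$DRY $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
chmod 0600 "$PSK" 2>/dev/null || true   # PSK files must not be group/world-readable
$DRY $RPC keyring_file_add_key key0 "$PSK"
$DRY $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
```

With `DRY=echo` this only prints the exact command lines seen in the trace, which is useful for reviewing the sequence without an SPDK checkout.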
00:25:44.987 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:44.987 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:44.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:44.987 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:44.987 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:44.987 [2024-10-07 14:36:06.718620] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:25:44.987 [2024-10-07 14:36:06.718734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3063020 ] 00:25:44.987 [2024-10-07 14:36:06.819267] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.987 [2024-10-07 14:36:06.954631] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:25:44.987 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:44.987 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:44.987 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.deuSSIW9HJ 00:25:44.987 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:25:44.987 [2024-10-07 14:36:07.775756] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:44.987 TLSTESTn1 00:25:44.987 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:44.987 Running I/O for 10 seconds... 00:25:46.502 3419.00 IOPS, 13.36 MiB/s [2024-10-07T12:36:11.155Z] 4222.00 IOPS, 16.49 MiB/s [2024-10-07T12:36:12.097Z] 4548.33 IOPS, 17.77 MiB/s [2024-10-07T12:36:13.041Z] 4593.75 IOPS, 17.94 MiB/s [2024-10-07T12:36:13.983Z] 4475.80 IOPS, 17.48 MiB/s [2024-10-07T12:36:15.367Z] 4592.50 IOPS, 17.94 MiB/s [2024-10-07T12:36:16.309Z] 4650.43 IOPS, 18.17 MiB/s [2024-10-07T12:36:17.251Z] 4624.50 IOPS, 18.06 MiB/s [2024-10-07T12:36:18.193Z] 4671.33 IOPS, 18.25 MiB/s [2024-10-07T12:36:18.193Z] 4658.90 IOPS, 18.20 MiB/s 00:25:54.484 Latency(us) 00:25:54.484 [2024-10-07T12:36:18.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.484 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:54.484 Verification LBA range: start 0x0 length 0x2000 00:25:54.484 TLSTESTn1 : 10.02 4663.23 18.22 0.00 0.00 27410.43 7263.57 92187.31 00:25:54.484 [2024-10-07T12:36:18.193Z] =================================================================================================================== 00:25:54.484 [2024-10-07T12:36:18.193Z] Total : 4663.23 18.22 0.00 0.00 27410.43 7263.57 92187.31 00:25:54.484 { 00:25:54.484 "results": [ 00:25:54.484 { 00:25:54.484 "job": "TLSTESTn1", 00:25:54.484 "core_mask": "0x4", 00:25:54.484 "workload": "verify", 00:25:54.484 "status": "finished", 00:25:54.484 "verify_range": { 00:25:54.484 "start": 0, 00:25:54.484 "length": 8192 00:25:54.484 }, 00:25:54.484 "queue_depth": 128, 00:25:54.484 "io_size": 4096, 00:25:54.484 "runtime": 10.018153, 00:25:54.484 "iops": 
4663.234829813439, 00:25:54.484 "mibps": 18.215761053958747, 00:25:54.484 "io_failed": 0, 00:25:54.484 "io_timeout": 0, 00:25:54.484 "avg_latency_us": 27410.427557705618, 00:25:54.484 "min_latency_us": 7263.573333333334, 00:25:54.484 "max_latency_us": 92187.30666666667 00:25:54.484 } 00:25:54.484 ], 00:25:54.484 "core_count": 1 00:25:54.484 } 00:25:54.484 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:54.484 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3063020 00:25:54.484 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3063020 ']' 00:25:54.484 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3063020 00:25:54.484 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:54.484 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:54.484 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3063020 00:25:54.484 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:54.484 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:54.484 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3063020' 00:25:54.484 killing process with pid 3063020 00:25:54.484 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3063020 00:25:54.484 Received shutdown signal, test time was about 10.000000 seconds 00:25:54.484 00:25:54.484 Latency(us) 00:25:54.484 [2024-10-07T12:36:18.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.484 [2024-10-07T12:36:18.193Z] 
=================================================================================================================== 00:25:54.484 [2024-10-07T12:36:18.193Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:54.484 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3063020 00:25:55.075 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uCGnxUiiem 00:25:55.075 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:25:55.075 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uCGnxUiiem 00:25:55.075 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:25:55.075 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:55.075 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:25:55.075 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:55.075 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uCGnxUiiem 00:25:55.075 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:55.075 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:55.075 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:55.075 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.uCGnxUiiem 00:25:55.075 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:55.075 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3065812 00:25:55.075 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:55.075 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3065812 /var/tmp/bdevperf.sock 00:25:55.075 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:55.075 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3065812 ']' 00:25:55.075 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:55.075 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:55.075 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:55.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:55.075 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:55.075 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:55.075 [2024-10-07 14:36:18.716408] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:25:55.075 [2024-10-07 14:36:18.716520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3065812 ] 00:25:55.368 [2024-10-07 14:36:18.817241] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.368 [2024-10-07 14:36:18.953601] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:25:55.983 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:55.983 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:55.983 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uCGnxUiiem 00:25:55.983 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:56.269 [2024-10-07 14:36:19.774383] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:56.269 [2024-10-07 14:36:19.784627] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:56.269 [2024-10-07 14:36:19.785468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (107): Transport endpoint is not connected 00:25:56.269 [2024-10-07 14:36:19.786452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:25:56.269 
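Both the passing bdevperf run earlier and the failing one here drive the initiator through the same two RPCs on bdevperf's private socket: register a key with the keyring, then attach with `--psk`; only the key file differs (here `/tmp/tmp.uCGnxUiiem`, a key the target was never told about, hence the `-5` Input/output error). A dry-run sketch of that pair, with placeholder paths:

```shell
#!/bin/sh
# Dry-run sketch of the initiator-side attach via bdevperf's RPC socket.
# RPC, SOCK, and PSK values are placeholders; DRY=echo prints instead of runs.
RPC="${RPC:-./scripts/rpc.py}"
SOCK="${SOCK:-/var/tmp/bdevperf.sock}"
DRY="${DRY:-echo}"
PSK="${PSK:-/tmp/psk.key}"

# Register the PSK under the name "key0" in bdevperf's keyring, then
# attach the controller referencing that key; a PSK the target does not
# recognize makes this attach fail as shown in the JSON-RPC error above.
$DRY $RPC -s "$SOCK" keyring_file_add_key key0 "$PSK"
$DRY $RPC -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
```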
[2024-10-07 14:36:19.787456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.269 [2024-10-07 14:36:19.787475] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:56.269 [2024-10-07 14:36:19.787487] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:25:56.269 [2024-10-07 14:36:19.787498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.269 request: 00:25:56.269 { 00:25:56.269 "name": "TLSTEST", 00:25:56.269 "trtype": "tcp", 00:25:56.269 "traddr": "10.0.0.2", 00:25:56.269 "adrfam": "ipv4", 00:25:56.269 "trsvcid": "4420", 00:25:56.269 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:56.269 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:56.269 "prchk_reftag": false, 00:25:56.269 "prchk_guard": false, 00:25:56.269 "hdgst": false, 00:25:56.269 "ddgst": false, 00:25:56.269 "psk": "key0", 00:25:56.269 "allow_unrecognized_csi": false, 00:25:56.269 "method": "bdev_nvme_attach_controller", 00:25:56.269 "req_id": 1 00:25:56.269 } 00:25:56.269 Got JSON-RPC error response 00:25:56.269 response: 00:25:56.269 { 00:25:56.269 "code": -5, 00:25:56.269 "message": "Input/output error" 00:25:56.269 } 00:25:56.269 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3065812 00:25:56.269 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3065812 ']' 00:25:56.269 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3065812 00:25:56.269 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:56.269 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:56.269 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3065812 00:25:56.269 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:56.269 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:56.269 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3065812' 00:25:56.269 killing process with pid 3065812 00:25:56.269 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3065812 00:25:56.269 Received shutdown signal, test time was about 10.000000 seconds 00:25:56.269 00:25:56.269 Latency(us) 00:25:56.269 [2024-10-07T12:36:19.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:56.269 [2024-10-07T12:36:19.978Z] =================================================================================================================== 00:25:56.270 [2024-10-07T12:36:19.979Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:56.270 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3065812 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.deuSSIW9HJ 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 
00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.deuSSIW9HJ 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.deuSSIW9HJ 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.deuSSIW9HJ 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3066170 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3066170 /var/tmp/bdevperf.sock 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 
4096 -w verify -t 10 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3066170 ']' 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:56.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:56.843 14:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:56.843 [2024-10-07 14:36:20.465273] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
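The `NOT run_bdevperf ...` wrappers in these traces come from `common/autotest_common.sh`: a helper that runs a command expected to fail and inverts its exit status (hence the `es=1`, `(( es > 128 ))`, and `(( !es == 0 ))` lines in the trace). A minimal stand-in for that pattern, under the assumption that the real helper also screens out signal exits:

```shell
#!/bin/sh
# Minimal sketch of the NOT-style negative-test helper: succeed only if
# the wrapped command fails. (The real autotest helper is more elaborate.)
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, as the test expects
}

NOT false && echo "negative test passed"
```

This is why a `return 1` deep inside `run_bdevperf` still lets the overall test case proceed: the wrapper treats that failure as the expected outcome.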
00:25:56.843 [2024-10-07 14:36:20.465384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3066170 ] 00:25:57.103 [2024-10-07 14:36:20.574720] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.103 [2024-10-07 14:36:20.709768] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:25:57.674 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:57.674 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:57.674 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.deuSSIW9HJ 00:25:57.935 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:25:57.935 [2024-10-07 14:36:21.558669] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:57.935 [2024-10-07 14:36:21.568802] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:57.935 [2024-10-07 14:36:21.568830] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:57.935 [2024-10-07 14:36:21.568859] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:25:57.935 [2024-10-07 14:36:21.568961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (107): Transport endpoint is not connected 00:25:57.935 [2024-10-07 14:36:21.569935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:25:57.935 [2024-10-07 14:36:21.570931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:57.935 [2024-10-07 14:36:21.570950] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:57.935 [2024-10-07 14:36:21.570960] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:25:57.935 [2024-10-07 14:36:21.570973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:57.935 request: 00:25:57.935 { 00:25:57.935 "name": "TLSTEST", 00:25:57.935 "trtype": "tcp", 00:25:57.935 "traddr": "10.0.0.2", 00:25:57.935 "adrfam": "ipv4", 00:25:57.935 "trsvcid": "4420", 00:25:57.935 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:57.935 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:57.935 "prchk_reftag": false, 00:25:57.935 "prchk_guard": false, 00:25:57.935 "hdgst": false, 00:25:57.935 "ddgst": false, 00:25:57.935 "psk": "key0", 00:25:57.935 "allow_unrecognized_csi": false, 00:25:57.935 "method": "bdev_nvme_attach_controller", 00:25:57.935 "req_id": 1 00:25:57.935 } 00:25:57.935 Got JSON-RPC error response 00:25:57.935 response: 00:25:57.935 { 00:25:57.935 "code": -5, 00:25:57.935 "message": "Input/output error" 00:25:57.935 } 00:25:57.935 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3066170 00:25:57.935 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3066170 ']' 00:25:57.935 14:36:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3066170 00:25:57.935 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:57.935 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:57.935 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3066170 00:25:58.196 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:58.196 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:58.196 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3066170' 00:25:58.196 killing process with pid 3066170 00:25:58.196 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3066170 00:25:58.196 Received shutdown signal, test time was about 10.000000 seconds 00:25:58.196 00:25:58.196 Latency(us) 00:25:58.196 [2024-10-07T12:36:21.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.196 [2024-10-07T12:36:21.905Z] =================================================================================================================== 00:25:58.196 [2024-10-07T12:36:21.905Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:58.196 14:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3066170 00:25:58.457 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:58.457 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:25:58.457 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:58.457 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:58.457 14:36:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:58.457 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.deuSSIW9HJ 00:25:58.457 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:25:58.457 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.deuSSIW9HJ 00:25:58.457 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:25:58.457 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:58.457 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:25:58.457 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:58.457 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.deuSSIW9HJ 00:25:58.457 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:58.457 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:25:58.457 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:58.457 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.deuSSIW9HJ 00:25:58.457 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:58.718 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3066513 00:25:58.718 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:58.718 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3066513 /var/tmp/bdevperf.sock 00:25:58.718 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:58.718 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3066513 ']' 00:25:58.718 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:58.718 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:58.718 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:58.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:58.719 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:58.719 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:58.719 [2024-10-07 14:36:22.242045] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:25:58.719 [2024-10-07 14:36:22.242156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3066513 ] 00:25:58.719 [2024-10-07 14:36:22.345368] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.979 [2024-10-07 14:36:22.481173] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:25:59.551 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:59.551 14:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:25:59.552 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.deuSSIW9HJ 00:25:59.552 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:59.813 [2024-10-07 14:36:23.302054] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:59.813 [2024-10-07 14:36:23.313085] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:59.813 [2024-10-07 14:36:23.313111] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:59.813 [2024-10-07 14:36:23.313137] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:25:59.813 [2024-10-07 14:36:23.313208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (107): Transport endpoint is not connected 00:25:59.813 [2024-10-07 14:36:23.314179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:25:59.813 [2024-10-07 14:36:23.315183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:59.813 [2024-10-07 14:36:23.315202] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:59.813 [2024-10-07 14:36:23.315214] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:25:59.813 [2024-10-07 14:36:23.315225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:59.813 request: 00:25:59.813 { 00:25:59.813 "name": "TLSTEST", 00:25:59.813 "trtype": "tcp", 00:25:59.813 "traddr": "10.0.0.2", 00:25:59.813 "adrfam": "ipv4", 00:25:59.813 "trsvcid": "4420", 00:25:59.813 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:59.813 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:59.813 "prchk_reftag": false, 00:25:59.813 "prchk_guard": false, 00:25:59.813 "hdgst": false, 00:25:59.813 "ddgst": false, 00:25:59.813 "psk": "key0", 00:25:59.813 "allow_unrecognized_csi": false, 00:25:59.813 "method": "bdev_nvme_attach_controller", 00:25:59.813 "req_id": 1 00:25:59.813 } 00:25:59.813 Got JSON-RPC error response 00:25:59.813 response: 00:25:59.813 { 00:25:59.813 "code": -5, 00:25:59.813 "message": "Input/output error" 00:25:59.813 } 00:25:59.813 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3066513 00:25:59.813 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3066513 ']' 00:25:59.813 14:36:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3066513 00:25:59.813 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:25:59.813 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:59.813 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3066513 00:25:59.813 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:59.813 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:59.813 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3066513' 00:25:59.813 killing process with pid 3066513 00:25:59.813 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3066513 00:25:59.813 Received shutdown signal, test time was about 10.000000 seconds 00:25:59.813 00:25:59.813 Latency(us) 00:25:59.813 [2024-10-07T12:36:23.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.813 [2024-10-07T12:36:23.522Z] =================================================================================================================== 00:25:59.813 [2024-10-07T12:36:23.522Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:59.813 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3066513 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:00.385 14:36:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3066860 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:00.385 14:36:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3066860 /var/tmp/bdevperf.sock 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3066860 ']' 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:00.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:00.385 14:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:00.385 [2024-10-07 14:36:23.991446] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:26:00.385 [2024-10-07 14:36:23.991557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3066860 ] 00:26:00.385 [2024-10-07 14:36:24.091599] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.646 [2024-10-07 14:36:24.226955] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:01.218 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:01.218 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:01.218 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:26:01.218 [2024-10-07 14:36:24.895379] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:26:01.218 [2024-10-07 14:36:24.895413] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:26:01.218 request: 00:26:01.218 { 00:26:01.218 "name": "key0", 00:26:01.218 "path": "", 00:26:01.218 "method": "keyring_file_add_key", 00:26:01.218 "req_id": 1 00:26:01.218 } 00:26:01.218 Got JSON-RPC error response 00:26:01.218 response: 00:26:01.218 { 00:26:01.218 "code": -1, 00:26:01.218 "message": "Operation not permitted" 00:26:01.218 } 00:26:01.218 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:01.478 [2024-10-07 14:36:25.059881] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:26:01.478 [2024-10-07 14:36:25.059915] bdev_nvme.c:6412:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:26:01.478 request: 00:26:01.478 { 00:26:01.478 "name": "TLSTEST", 00:26:01.478 "trtype": "tcp", 00:26:01.478 "traddr": "10.0.0.2", 00:26:01.478 "adrfam": "ipv4", 00:26:01.478 "trsvcid": "4420", 00:26:01.478 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:01.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:01.478 "prchk_reftag": false, 00:26:01.478 "prchk_guard": false, 00:26:01.478 "hdgst": false, 00:26:01.478 "ddgst": false, 00:26:01.478 "psk": "key0", 00:26:01.478 "allow_unrecognized_csi": false, 00:26:01.478 "method": "bdev_nvme_attach_controller", 00:26:01.478 "req_id": 1 00:26:01.478 } 00:26:01.478 Got JSON-RPC error response 00:26:01.478 response: 00:26:01.478 { 00:26:01.478 "code": -126, 00:26:01.478 "message": "Required key not available" 00:26:01.478 } 00:26:01.478 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3066860 00:26:01.478 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3066860 ']' 00:26:01.478 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3066860 00:26:01.478 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:01.478 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:01.478 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3066860 00:26:01.478 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:01.478 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:01.478 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3066860' 00:26:01.478 killing process with pid 3066860 
00:26:01.478 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3066860 00:26:01.478 Received shutdown signal, test time was about 10.000000 seconds 00:26:01.478 00:26:01.478 Latency(us) 00:26:01.478 [2024-10-07T12:36:25.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:01.478 [2024-10-07T12:36:25.187Z] =================================================================================================================== 00:26:01.478 [2024-10-07T12:36:25.187Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:01.478 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3066860 00:26:02.048 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:26:02.048 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:26:02.048 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:02.048 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:02.048 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:02.048 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3060101 00:26:02.048 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3060101 ']' 00:26:02.048 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3060101 00:26:02.048 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:02.048 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:02.048 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3060101 00:26:02.048 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- 
# process_name=reactor_1 00:26:02.048 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:02.048 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3060101' 00:26:02.048 killing process with pid 3060101 00:26:02.048 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3060101 00:26:02.048 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3060101 00:26:02.990 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:26:02.990 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:26:02.990 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:26:02.990 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:26:02.990 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:26:02.990 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:26:02.990 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:26:02.990 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:26:02.990 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:26:02.990 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.awhQqek6oU 00:26:02.990 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:26:02.990 14:36:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.awhQqek6oU 00:26:02.990 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:26:02.990 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:02.990 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:02.990 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:02.990 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3067418 00:26:02.990 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3067418 00:26:02.990 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:02.990 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3067418 ']' 00:26:02.990 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.990 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:02.990 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.990 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:02.990 14:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:02.990 [2024-10-07 14:36:26.624064] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:26:02.990 [2024-10-07 14:36:26.624171] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:03.251 [2024-10-07 14:36:26.760181] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.251 [2024-10-07 14:36:26.895755] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:03.251 [2024-10-07 14:36:26.895802] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:03.251 [2024-10-07 14:36:26.895811] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:03.251 [2024-10-07 14:36:26.895819] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:03.251 [2024-10-07 14:36:26.895826] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:03.251 [2024-10-07 14:36:26.896749] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.823 14:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:03.823 14:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:03.823 14:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:03.823 14:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:03.823 14:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:03.823 14:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:03.823 14:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.awhQqek6oU 00:26:03.823 14:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.awhQqek6oU 00:26:03.823 14:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:04.084 [2024-10-07 14:36:27.558721] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:04.084 14:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:04.084 14:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:04.344 [2024-10-07 14:36:27.875521] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:04.344 [2024-10-07 14:36:27.875778] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:26:04.344 14:36:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:04.605 malloc0 00:26:04.605 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:04.605 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.awhQqek6oU 00:26:04.871 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:26:04.871 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.awhQqek6oU 00:26:04.871 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:04.871 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:04.871 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:04.871 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.awhQqek6oU 00:26:04.871 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:04.871 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3067909 00:26:04.871 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:04.871 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3067909 /var/tmp/bdevperf.sock 
00:26:04.871 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:04.871 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3067909 ']' 00:26:04.871 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:04.871 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:04.871 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:04.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:04.871 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:04.871 14:36:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:05.134 [2024-10-07 14:36:28.652832] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:26:05.134 [2024-10-07 14:36:28.652947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3067909 ] 00:26:05.134 [2024-10-07 14:36:28.757724] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.394 [2024-10-07 14:36:28.893662] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:05.965 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:05.965 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:05.965 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.awhQqek6oU 00:26:05.966 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:06.226 [2024-10-07 14:36:29.746500] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:06.226 TLSTESTn1 00:26:06.226 14:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:26:06.486 Running I/O for 10 seconds... 
00:26:08.368 4024.00 IOPS, 15.72 MiB/s [2024-10-07T12:36:33.019Z] 4169.50 IOPS, 16.29 MiB/s [2024-10-07T12:36:33.962Z] 4192.33 IOPS, 16.38 MiB/s [2024-10-07T12:36:35.347Z] 4397.00 IOPS, 17.18 MiB/s [2024-10-07T12:36:36.288Z] 4524.60 IOPS, 17.67 MiB/s [2024-10-07T12:36:37.231Z] 4482.83 IOPS, 17.51 MiB/s [2024-10-07T12:36:38.172Z] 4528.57 IOPS, 17.69 MiB/s [2024-10-07T12:36:39.115Z] 4605.62 IOPS, 17.99 MiB/s [2024-10-07T12:36:40.056Z] 4576.00 IOPS, 17.88 MiB/s [2024-10-07T12:36:40.056Z] 4552.60 IOPS, 17.78 MiB/s 00:26:16.347 Latency(us) 00:26:16.347 [2024-10-07T12:36:40.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.347 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:16.347 Verification LBA range: start 0x0 length 0x2000 00:26:16.347 TLSTESTn1 : 10.03 4551.60 17.78 0.00 0.00 28068.59 5543.25 69468.16 00:26:16.347 [2024-10-07T12:36:40.056Z] =================================================================================================================== 00:26:16.347 [2024-10-07T12:36:40.056Z] Total : 4551.60 17.78 0.00 0.00 28068.59 5543.25 69468.16 00:26:16.347 { 00:26:16.347 "results": [ 00:26:16.347 { 00:26:16.347 "job": "TLSTESTn1", 00:26:16.347 "core_mask": "0x4", 00:26:16.347 "workload": "verify", 00:26:16.347 "status": "finished", 00:26:16.347 "verify_range": { 00:26:16.347 "start": 0, 00:26:16.347 "length": 8192 00:26:16.347 }, 00:26:16.347 "queue_depth": 128, 00:26:16.347 "io_size": 4096, 00:26:16.347 "runtime": 10.030105, 00:26:16.347 "iops": 4551.597415979195, 00:26:16.347 "mibps": 17.77967740616873, 00:26:16.347 "io_failed": 0, 00:26:16.347 "io_timeout": 0, 00:26:16.347 "avg_latency_us": 28068.591033520977, 00:26:16.347 "min_latency_us": 5543.253333333333, 00:26:16.347 "max_latency_us": 69468.16 00:26:16.347 } 00:26:16.347 ], 00:26:16.347 "core_count": 1 00:26:16.347 } 00:26:16.347 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:26:16.347 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3067909 00:26:16.347 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3067909 ']' 00:26:16.347 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3067909 00:26:16.347 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:16.347 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:16.347 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3067909 00:26:16.608 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:16.608 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:16.608 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3067909' 00:26:16.608 killing process with pid 3067909 00:26:16.608 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3067909 00:26:16.608 Received shutdown signal, test time was about 10.000000 seconds 00:26:16.608 00:26:16.608 Latency(us) 00:26:16.608 [2024-10-07T12:36:40.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.608 [2024-10-07T12:36:40.317Z] =================================================================================================================== 00:26:16.608 [2024-10-07T12:36:40.317Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:16.609 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3067909 00:26:17.180 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.awhQqek6oU 00:26:17.180 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 
-- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.awhQqek6oU 00:26:17.180 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:26:17.180 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.awhQqek6oU 00:26:17.180 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:26:17.180 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:17.180 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:26:17.180 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:17.180 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.awhQqek6oU 00:26:17.180 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:17.180 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:17.180 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:17.180 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.awhQqek6oU 00:26:17.180 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:17.180 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3070239 00:26:17.180 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:17.180 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3070239 /var/tmp/bdevperf.sock 
00:26:17.180 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:17.180 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3070239 ']' 00:26:17.180 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:17.180 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:17.180 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:17.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:17.180 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:17.180 14:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:17.180 [2024-10-07 14:36:40.718720] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:26:17.180 [2024-10-07 14:36:40.718833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070239 ] 00:26:17.180 [2024-10-07 14:36:40.819192] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.441 [2024-10-07 14:36:40.955403] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:18.012 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:18.012 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:18.012 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.awhQqek6oU 00:26:18.012 [2024-10-07 14:36:41.643725] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.awhQqek6oU': 0100666 00:26:18.012 [2024-10-07 14:36:41.643756] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:26:18.012 request: 00:26:18.012 { 00:26:18.012 "name": "key0", 00:26:18.012 "path": "/tmp/tmp.awhQqek6oU", 00:26:18.012 "method": "keyring_file_add_key", 00:26:18.012 "req_id": 1 00:26:18.012 } 00:26:18.012 Got JSON-RPC error response 00:26:18.012 response: 00:26:18.012 { 00:26:18.012 "code": -1, 00:26:18.012 "message": "Operation not permitted" 00:26:18.012 } 00:26:18.012 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:18.273 [2024-10-07 14:36:41.828273] bdev_nvme_rpc.c: 
517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:18.274 [2024-10-07 14:36:41.828308] bdev_nvme.c:6412:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:26:18.274 request: 00:26:18.274 { 00:26:18.274 "name": "TLSTEST", 00:26:18.274 "trtype": "tcp", 00:26:18.274 "traddr": "10.0.0.2", 00:26:18.274 "adrfam": "ipv4", 00:26:18.274 "trsvcid": "4420", 00:26:18.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:18.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:18.274 "prchk_reftag": false, 00:26:18.274 "prchk_guard": false, 00:26:18.274 "hdgst": false, 00:26:18.274 "ddgst": false, 00:26:18.274 "psk": "key0", 00:26:18.274 "allow_unrecognized_csi": false, 00:26:18.274 "method": "bdev_nvme_attach_controller", 00:26:18.274 "req_id": 1 00:26:18.274 } 00:26:18.274 Got JSON-RPC error response 00:26:18.274 response: 00:26:18.274 { 00:26:18.274 "code": -126, 00:26:18.274 "message": "Required key not available" 00:26:18.274 } 00:26:18.274 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3070239 00:26:18.274 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3070239 ']' 00:26:18.274 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3070239 00:26:18.274 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:18.274 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:18.274 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3070239 00:26:18.274 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:18.274 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:18.274 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 3070239' 00:26:18.274 killing process with pid 3070239 00:26:18.274 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3070239 00:26:18.274 Received shutdown signal, test time was about 10.000000 seconds 00:26:18.274 00:26:18.274 Latency(us) 00:26:18.274 [2024-10-07T12:36:41.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.274 [2024-10-07T12:36:41.983Z] =================================================================================================================== 00:26:18.274 [2024-10-07T12:36:41.983Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:18.274 14:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3070239 00:26:18.845 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:26:18.845 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:26:18.845 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:18.845 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:18.845 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:18.845 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3067418 00:26:18.845 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3067418 ']' 00:26:18.845 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3067418 00:26:18.845 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:18.845 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:18.845 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3067418 00:26:18.845 
14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:18.845 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:18.845 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3067418' 00:26:18.845 killing process with pid 3067418 00:26:18.845 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3067418 00:26:18.845 14:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3067418 00:26:19.788 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:26:19.788 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:19.788 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:19.788 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:19.788 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3070615 00:26:19.788 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3070615 00:26:19.788 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:19.788 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3070615 ']' 00:26:19.788 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.788 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:19.788 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:26:19.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.788 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:19.788 14:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:19.788 [2024-10-07 14:36:43.308237] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:26:19.788 [2024-10-07 14:36:43.308359] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:19.788 [2024-10-07 14:36:43.456331] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.049 [2024-10-07 14:36:43.601348] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:20.049 [2024-10-07 14:36:43.601392] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:20.049 [2024-10-07 14:36:43.601400] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:20.049 [2024-10-07 14:36:43.601409] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:20.049 [2024-10-07 14:36:43.601416] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:20.049 [2024-10-07 14:36:43.602324] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.621 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:20.622 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:20.622 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:20.622 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:20.622 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:20.622 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.622 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.awhQqek6oU 00:26:20.622 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:26:20.622 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.awhQqek6oU 00:26:20.622 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:26:20.622 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:20.622 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:26:20.622 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:20.622 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.awhQqek6oU 00:26:20.622 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.awhQqek6oU 00:26:20.622 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:20.622 [2024-10-07 14:36:44.244170] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.622 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:20.883 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:20.883 [2024-10-07 14:36:44.544911] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:20.883 [2024-10-07 14:36:44.545155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.883 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:21.143 malloc0 00:26:21.143 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:21.404 14:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.awhQqek6oU 00:26:21.404 [2024-10-07 14:36:45.059440] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.awhQqek6oU': 0100666 00:26:21.404 [2024-10-07 14:36:45.059471] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:26:21.404 request: 00:26:21.404 { 00:26:21.404 "name": "key0", 00:26:21.404 "path": "/tmp/tmp.awhQqek6oU", 00:26:21.404 "method": "keyring_file_add_key", 00:26:21.404 "req_id": 1 
00:26:21.404 } 00:26:21.404 Got JSON-RPC error response 00:26:21.404 response: 00:26:21.404 { 00:26:21.404 "code": -1, 00:26:21.404 "message": "Operation not permitted" 00:26:21.404 } 00:26:21.404 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:26:21.665 [2024-10-07 14:36:45.227911] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:26:21.665 [2024-10-07 14:36:45.227950] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:26:21.665 request: 00:26:21.665 { 00:26:21.665 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:21.665 "host": "nqn.2016-06.io.spdk:host1", 00:26:21.665 "psk": "key0", 00:26:21.665 "method": "nvmf_subsystem_add_host", 00:26:21.665 "req_id": 1 00:26:21.665 } 00:26:21.665 Got JSON-RPC error response 00:26:21.665 response: 00:26:21.665 { 00:26:21.665 "code": -32603, 00:26:21.665 "message": "Internal error" 00:26:21.665 } 00:26:21.665 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:26:21.665 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:21.665 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:21.665 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:21.665 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3070615 00:26:21.665 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3070615 ']' 00:26:21.665 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3070615 00:26:21.665 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:21.665 14:36:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:21.665 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3070615 00:26:21.665 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:21.665 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:21.665 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3070615' 00:26:21.665 killing process with pid 3070615 00:26:21.665 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3070615 00:26:21.665 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3070615 00:26:22.606 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.awhQqek6oU 00:26:22.606 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:26:22.606 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:22.606 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:22.606 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:22.606 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3071311 00:26:22.606 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3071311 00:26:22.606 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:22.606 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3071311 ']' 00:26:22.606 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.606 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:22.606 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.606 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:22.606 14:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:22.606 [2024-10-07 14:36:46.085944] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:26:22.606 [2024-10-07 14:36:46.086067] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.606 [2024-10-07 14:36:46.233141] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.866 [2024-10-07 14:36:46.377130] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:22.866 [2024-10-07 14:36:46.377173] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:22.866 [2024-10-07 14:36:46.377181] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:22.866 [2024-10-07 14:36:46.377189] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:22.866 [2024-10-07 14:36:46.377196] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:22.866 [2024-10-07 14:36:46.378088] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.127 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:23.127 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:23.127 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:23.127 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:23.127 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:23.388 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:23.388 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.awhQqek6oU 00:26:23.388 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.awhQqek6oU 00:26:23.388 14:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:23.388 [2024-10-07 14:36:47.027978] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:23.388 14:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:23.648 14:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:23.909 [2024-10-07 14:36:47.364837] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:23.909 [2024-10-07 14:36:47.365080] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:26:23.909 14:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:23.909 malloc0 00:26:23.909 14:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:24.169 14:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.awhQqek6oU 00:26:24.430 14:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:26:24.430 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3071675 00:26:24.430 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:24.430 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:24.430 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3071675 /var/tmp/bdevperf.sock 00:26:24.430 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3071675 ']' 00:26:24.430 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:24.430 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:24.430 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:26:24.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:24.430 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:24.430 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:24.691 [2024-10-07 14:36:48.158516] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:26:24.691 [2024-10-07 14:36:48.158627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071675 ] 00:26:24.691 [2024-10-07 14:36:48.265393] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.951 [2024-10-07 14:36:48.400694] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:25.212 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:25.212 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:25.212 14:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.awhQqek6oU 00:26:25.472 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:25.733 [2024-10-07 14:36:49.213646] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:25.733 TLSTESTn1 00:26:25.733 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:26:25.994 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:26:25.994 "subsystems": [ 00:26:25.994 { 00:26:25.994 "subsystem": "keyring", 00:26:25.994 "config": [ 00:26:25.994 { 00:26:25.994 "method": "keyring_file_add_key", 00:26:25.994 "params": { 00:26:25.994 "name": "key0", 00:26:25.994 "path": "/tmp/tmp.awhQqek6oU" 00:26:25.994 } 00:26:25.994 } 00:26:25.994 ] 00:26:25.994 }, 00:26:25.994 { 00:26:25.994 "subsystem": "iobuf", 00:26:25.994 "config": [ 00:26:25.994 { 00:26:25.994 "method": "iobuf_set_options", 00:26:25.994 "params": { 00:26:25.994 "small_pool_count": 8192, 00:26:25.994 "large_pool_count": 1024, 00:26:25.994 "small_bufsize": 8192, 00:26:25.994 "large_bufsize": 135168 00:26:25.994 } 00:26:25.994 } 00:26:25.994 ] 00:26:25.994 }, 00:26:25.994 { 00:26:25.994 "subsystem": "sock", 00:26:25.994 "config": [ 00:26:25.994 { 00:26:25.994 "method": "sock_set_default_impl", 00:26:25.994 "params": { 00:26:25.994 "impl_name": "posix" 00:26:25.994 } 00:26:25.994 }, 00:26:25.994 { 00:26:25.994 "method": "sock_impl_set_options", 00:26:25.994 "params": { 00:26:25.994 "impl_name": "ssl", 00:26:25.994 "recv_buf_size": 4096, 00:26:25.994 "send_buf_size": 4096, 00:26:25.994 "enable_recv_pipe": true, 00:26:25.994 "enable_quickack": false, 00:26:25.994 "enable_placement_id": 0, 00:26:25.994 "enable_zerocopy_send_server": true, 00:26:25.994 "enable_zerocopy_send_client": false, 00:26:25.994 "zerocopy_threshold": 0, 00:26:25.994 "tls_version": 0, 00:26:25.994 "enable_ktls": false 00:26:25.994 } 00:26:25.994 }, 00:26:25.994 { 00:26:25.994 "method": "sock_impl_set_options", 00:26:25.994 "params": { 00:26:25.994 "impl_name": "posix", 00:26:25.994 "recv_buf_size": 2097152, 00:26:25.994 "send_buf_size": 2097152, 00:26:25.994 "enable_recv_pipe": true, 00:26:25.994 "enable_quickack": false, 00:26:25.994 "enable_placement_id": 0, 00:26:25.994 
"enable_zerocopy_send_server": true, 00:26:25.994 "enable_zerocopy_send_client": false, 00:26:25.994 "zerocopy_threshold": 0, 00:26:25.994 "tls_version": 0, 00:26:25.994 "enable_ktls": false 00:26:25.994 } 00:26:25.994 } 00:26:25.994 ] 00:26:25.994 }, 00:26:25.994 { 00:26:25.994 "subsystem": "vmd", 00:26:25.994 "config": [] 00:26:25.994 }, 00:26:25.994 { 00:26:25.994 "subsystem": "accel", 00:26:25.994 "config": [ 00:26:25.994 { 00:26:25.994 "method": "accel_set_options", 00:26:25.994 "params": { 00:26:25.994 "small_cache_size": 128, 00:26:25.994 "large_cache_size": 16, 00:26:25.994 "task_count": 2048, 00:26:25.994 "sequence_count": 2048, 00:26:25.994 "buf_count": 2048 00:26:25.994 } 00:26:25.994 } 00:26:25.994 ] 00:26:25.994 }, 00:26:25.994 { 00:26:25.994 "subsystem": "bdev", 00:26:25.994 "config": [ 00:26:25.994 { 00:26:25.994 "method": "bdev_set_options", 00:26:25.994 "params": { 00:26:25.994 "bdev_io_pool_size": 65535, 00:26:25.994 "bdev_io_cache_size": 256, 00:26:25.994 "bdev_auto_examine": true, 00:26:25.994 "iobuf_small_cache_size": 128, 00:26:25.994 "iobuf_large_cache_size": 16 00:26:25.994 } 00:26:25.994 }, 00:26:25.994 { 00:26:25.994 "method": "bdev_raid_set_options", 00:26:25.994 "params": { 00:26:25.994 "process_window_size_kb": 1024, 00:26:25.994 "process_max_bandwidth_mb_sec": 0 00:26:25.994 } 00:26:25.994 }, 00:26:25.994 { 00:26:25.994 "method": "bdev_iscsi_set_options", 00:26:25.994 "params": { 00:26:25.994 "timeout_sec": 30 00:26:25.994 } 00:26:25.994 }, 00:26:25.994 { 00:26:25.994 "method": "bdev_nvme_set_options", 00:26:25.994 "params": { 00:26:25.994 "action_on_timeout": "none", 00:26:25.994 "timeout_us": 0, 00:26:25.994 "timeout_admin_us": 0, 00:26:25.994 "keep_alive_timeout_ms": 10000, 00:26:25.994 "arbitration_burst": 0, 00:26:25.994 "low_priority_weight": 0, 00:26:25.994 "medium_priority_weight": 0, 00:26:25.994 "high_priority_weight": 0, 00:26:25.994 "nvme_adminq_poll_period_us": 10000, 00:26:25.994 "nvme_ioq_poll_period_us": 0, 00:26:25.994 
"io_queue_requests": 0, 00:26:25.994 "delay_cmd_submit": true, 00:26:25.994 "transport_retry_count": 4, 00:26:25.994 "bdev_retry_count": 3, 00:26:25.994 "transport_ack_timeout": 0, 00:26:25.994 "ctrlr_loss_timeout_sec": 0, 00:26:25.994 "reconnect_delay_sec": 0, 00:26:25.994 "fast_io_fail_timeout_sec": 0, 00:26:25.994 "disable_auto_failback": false, 00:26:25.994 "generate_uuids": false, 00:26:25.994 "transport_tos": 0, 00:26:25.994 "nvme_error_stat": false, 00:26:25.994 "rdma_srq_size": 0, 00:26:25.994 "io_path_stat": false, 00:26:25.994 "allow_accel_sequence": false, 00:26:25.995 "rdma_max_cq_size": 0, 00:26:25.995 "rdma_cm_event_timeout_ms": 0, 00:26:25.995 "dhchap_digests": [ 00:26:25.995 "sha256", 00:26:25.995 "sha384", 00:26:25.995 "sha512" 00:26:25.995 ], 00:26:25.995 "dhchap_dhgroups": [ 00:26:25.995 "null", 00:26:25.995 "ffdhe2048", 00:26:25.995 "ffdhe3072", 00:26:25.995 "ffdhe4096", 00:26:25.995 "ffdhe6144", 00:26:25.995 "ffdhe8192" 00:26:25.995 ] 00:26:25.995 } 00:26:25.995 }, 00:26:25.995 { 00:26:25.995 "method": "bdev_nvme_set_hotplug", 00:26:25.995 "params": { 00:26:25.995 "period_us": 100000, 00:26:25.995 "enable": false 00:26:25.995 } 00:26:25.995 }, 00:26:25.995 { 00:26:25.995 "method": "bdev_malloc_create", 00:26:25.995 "params": { 00:26:25.995 "name": "malloc0", 00:26:25.995 "num_blocks": 8192, 00:26:25.995 "block_size": 4096, 00:26:25.995 "physical_block_size": 4096, 00:26:25.995 "uuid": "ffbe41d4-2b18-4af6-be28-a348fd095f86", 00:26:25.995 "optimal_io_boundary": 0, 00:26:25.995 "md_size": 0, 00:26:25.995 "dif_type": 0, 00:26:25.995 "dif_is_head_of_md": false, 00:26:25.995 "dif_pi_format": 0 00:26:25.995 } 00:26:25.995 }, 00:26:25.995 { 00:26:25.995 "method": "bdev_wait_for_examine" 00:26:25.995 } 00:26:25.995 ] 00:26:25.995 }, 00:26:25.995 { 00:26:25.995 "subsystem": "nbd", 00:26:25.995 "config": [] 00:26:25.995 }, 00:26:25.995 { 00:26:25.995 "subsystem": "scheduler", 00:26:25.995 "config": [ 00:26:25.995 { 00:26:25.995 "method": 
"framework_set_scheduler", 00:26:25.995 "params": { 00:26:25.995 "name": "static" 00:26:25.995 } 00:26:25.995 } 00:26:25.995 ] 00:26:25.995 }, 00:26:25.995 { 00:26:25.995 "subsystem": "nvmf", 00:26:25.995 "config": [ 00:26:25.995 { 00:26:25.995 "method": "nvmf_set_config", 00:26:25.995 "params": { 00:26:25.995 "discovery_filter": "match_any", 00:26:25.995 "admin_cmd_passthru": { 00:26:25.995 "identify_ctrlr": false 00:26:25.995 }, 00:26:25.995 "dhchap_digests": [ 00:26:25.995 "sha256", 00:26:25.995 "sha384", 00:26:25.995 "sha512" 00:26:25.995 ], 00:26:25.995 "dhchap_dhgroups": [ 00:26:25.995 "null", 00:26:25.995 "ffdhe2048", 00:26:25.995 "ffdhe3072", 00:26:25.995 "ffdhe4096", 00:26:25.995 "ffdhe6144", 00:26:25.995 "ffdhe8192" 00:26:25.995 ] 00:26:25.995 } 00:26:25.995 }, 00:26:25.995 { 00:26:25.995 "method": "nvmf_set_max_subsystems", 00:26:25.995 "params": { 00:26:25.995 "max_subsystems": 1024 00:26:25.995 } 00:26:25.995 }, 00:26:25.995 { 00:26:25.995 "method": "nvmf_set_crdt", 00:26:25.995 "params": { 00:26:25.995 "crdt1": 0, 00:26:25.995 "crdt2": 0, 00:26:25.995 "crdt3": 0 00:26:25.995 } 00:26:25.995 }, 00:26:25.995 { 00:26:25.995 "method": "nvmf_create_transport", 00:26:25.995 "params": { 00:26:25.995 "trtype": "TCP", 00:26:25.995 "max_queue_depth": 128, 00:26:25.995 "max_io_qpairs_per_ctrlr": 127, 00:26:25.995 "in_capsule_data_size": 4096, 00:26:25.995 "max_io_size": 131072, 00:26:25.995 "io_unit_size": 131072, 00:26:25.995 "max_aq_depth": 128, 00:26:25.995 "num_shared_buffers": 511, 00:26:25.995 "buf_cache_size": 4294967295, 00:26:25.995 "dif_insert_or_strip": false, 00:26:25.995 "zcopy": false, 00:26:25.995 "c2h_success": false, 00:26:25.995 "sock_priority": 0, 00:26:25.995 "abort_timeout_sec": 1, 00:26:25.995 "ack_timeout": 0, 00:26:25.995 "data_wr_pool_size": 0 00:26:25.995 } 00:26:25.995 }, 00:26:25.995 { 00:26:25.995 "method": "nvmf_create_subsystem", 00:26:25.995 "params": { 00:26:25.995 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:25.995 
"allow_any_host": false, 00:26:25.995 "serial_number": "SPDK00000000000001", 00:26:25.995 "model_number": "SPDK bdev Controller", 00:26:25.995 "max_namespaces": 10, 00:26:25.995 "min_cntlid": 1, 00:26:25.995 "max_cntlid": 65519, 00:26:25.995 "ana_reporting": false 00:26:25.995 } 00:26:25.995 }, 00:26:25.995 { 00:26:25.995 "method": "nvmf_subsystem_add_host", 00:26:25.995 "params": { 00:26:25.995 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:25.995 "host": "nqn.2016-06.io.spdk:host1", 00:26:25.995 "psk": "key0" 00:26:25.995 } 00:26:25.995 }, 00:26:25.995 { 00:26:25.995 "method": "nvmf_subsystem_add_ns", 00:26:25.995 "params": { 00:26:25.995 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:25.995 "namespace": { 00:26:25.995 "nsid": 1, 00:26:25.995 "bdev_name": "malloc0", 00:26:25.995 "nguid": "FFBE41D42B184AF6BE28A348FD095F86", 00:26:25.995 "uuid": "ffbe41d4-2b18-4af6-be28-a348fd095f86", 00:26:25.995 "no_auto_visible": false 00:26:25.995 } 00:26:25.995 } 00:26:25.995 }, 00:26:25.995 { 00:26:25.995 "method": "nvmf_subsystem_add_listener", 00:26:25.995 "params": { 00:26:25.995 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:25.995 "listen_address": { 00:26:25.995 "trtype": "TCP", 00:26:25.995 "adrfam": "IPv4", 00:26:25.995 "traddr": "10.0.0.2", 00:26:25.995 "trsvcid": "4420" 00:26:25.995 }, 00:26:25.995 "secure_channel": true 00:26:25.995 } 00:26:25.995 } 00:26:25.995 ] 00:26:25.995 } 00:26:25.995 ] 00:26:25.995 }' 00:26:25.995 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:26:26.256 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:26:26.256 "subsystems": [ 00:26:26.256 { 00:26:26.256 "subsystem": "keyring", 00:26:26.256 "config": [ 00:26:26.256 { 00:26:26.256 "method": "keyring_file_add_key", 00:26:26.256 "params": { 00:26:26.256 "name": "key0", 00:26:26.256 "path": "/tmp/tmp.awhQqek6oU" 00:26:26.256 } 
00:26:26.256 } 00:26:26.256 ] 00:26:26.256 }, 00:26:26.256 { 00:26:26.256 "subsystem": "iobuf", 00:26:26.256 "config": [ 00:26:26.256 { 00:26:26.256 "method": "iobuf_set_options", 00:26:26.256 "params": { 00:26:26.256 "small_pool_count": 8192, 00:26:26.256 "large_pool_count": 1024, 00:26:26.256 "small_bufsize": 8192, 00:26:26.256 "large_bufsize": 135168 00:26:26.256 } 00:26:26.256 } 00:26:26.256 ] 00:26:26.256 }, 00:26:26.256 { 00:26:26.256 "subsystem": "sock", 00:26:26.256 "config": [ 00:26:26.256 { 00:26:26.256 "method": "sock_set_default_impl", 00:26:26.256 "params": { 00:26:26.256 "impl_name": "posix" 00:26:26.256 } 00:26:26.256 }, 00:26:26.256 { 00:26:26.256 "method": "sock_impl_set_options", 00:26:26.256 "params": { 00:26:26.256 "impl_name": "ssl", 00:26:26.256 "recv_buf_size": 4096, 00:26:26.256 "send_buf_size": 4096, 00:26:26.256 "enable_recv_pipe": true, 00:26:26.256 "enable_quickack": false, 00:26:26.256 "enable_placement_id": 0, 00:26:26.256 "enable_zerocopy_send_server": true, 00:26:26.256 "enable_zerocopy_send_client": false, 00:26:26.256 "zerocopy_threshold": 0, 00:26:26.256 "tls_version": 0, 00:26:26.256 "enable_ktls": false 00:26:26.256 } 00:26:26.256 }, 00:26:26.256 { 00:26:26.256 "method": "sock_impl_set_options", 00:26:26.256 "params": { 00:26:26.256 "impl_name": "posix", 00:26:26.256 "recv_buf_size": 2097152, 00:26:26.256 "send_buf_size": 2097152, 00:26:26.256 "enable_recv_pipe": true, 00:26:26.256 "enable_quickack": false, 00:26:26.256 "enable_placement_id": 0, 00:26:26.256 "enable_zerocopy_send_server": true, 00:26:26.256 "enable_zerocopy_send_client": false, 00:26:26.256 "zerocopy_threshold": 0, 00:26:26.256 "tls_version": 0, 00:26:26.256 "enable_ktls": false 00:26:26.256 } 00:26:26.256 } 00:26:26.256 ] 00:26:26.256 }, 00:26:26.256 { 00:26:26.256 "subsystem": "vmd", 00:26:26.256 "config": [] 00:26:26.256 }, 00:26:26.256 { 00:26:26.256 "subsystem": "accel", 00:26:26.256 "config": [ 00:26:26.256 { 00:26:26.256 "method": "accel_set_options", 
00:26:26.256 "params": { 00:26:26.256 "small_cache_size": 128, 00:26:26.256 "large_cache_size": 16, 00:26:26.256 "task_count": 2048, 00:26:26.256 "sequence_count": 2048, 00:26:26.256 "buf_count": 2048 00:26:26.256 } 00:26:26.256 } 00:26:26.256 ] 00:26:26.256 }, 00:26:26.256 { 00:26:26.256 "subsystem": "bdev", 00:26:26.256 "config": [ 00:26:26.256 { 00:26:26.256 "method": "bdev_set_options", 00:26:26.256 "params": { 00:26:26.256 "bdev_io_pool_size": 65535, 00:26:26.256 "bdev_io_cache_size": 256, 00:26:26.257 "bdev_auto_examine": true, 00:26:26.257 "iobuf_small_cache_size": 128, 00:26:26.257 "iobuf_large_cache_size": 16 00:26:26.257 } 00:26:26.257 }, 00:26:26.257 { 00:26:26.257 "method": "bdev_raid_set_options", 00:26:26.257 "params": { 00:26:26.257 "process_window_size_kb": 1024, 00:26:26.257 "process_max_bandwidth_mb_sec": 0 00:26:26.257 } 00:26:26.257 }, 00:26:26.257 { 00:26:26.257 "method": "bdev_iscsi_set_options", 00:26:26.257 "params": { 00:26:26.257 "timeout_sec": 30 00:26:26.257 } 00:26:26.257 }, 00:26:26.257 { 00:26:26.257 "method": "bdev_nvme_set_options", 00:26:26.257 "params": { 00:26:26.257 "action_on_timeout": "none", 00:26:26.257 "timeout_us": 0, 00:26:26.257 "timeout_admin_us": 0, 00:26:26.257 "keep_alive_timeout_ms": 10000, 00:26:26.257 "arbitration_burst": 0, 00:26:26.257 "low_priority_weight": 0, 00:26:26.257 "medium_priority_weight": 0, 00:26:26.257 "high_priority_weight": 0, 00:26:26.257 "nvme_adminq_poll_period_us": 10000, 00:26:26.257 "nvme_ioq_poll_period_us": 0, 00:26:26.257 "io_queue_requests": 512, 00:26:26.257 "delay_cmd_submit": true, 00:26:26.257 "transport_retry_count": 4, 00:26:26.257 "bdev_retry_count": 3, 00:26:26.257 "transport_ack_timeout": 0, 00:26:26.257 "ctrlr_loss_timeout_sec": 0, 00:26:26.257 "reconnect_delay_sec": 0, 00:26:26.257 "fast_io_fail_timeout_sec": 0, 00:26:26.257 "disable_auto_failback": false, 00:26:26.257 "generate_uuids": false, 00:26:26.257 "transport_tos": 0, 00:26:26.257 "nvme_error_stat": false, 00:26:26.257 
"rdma_srq_size": 0, 00:26:26.257 "io_path_stat": false, 00:26:26.257 "allow_accel_sequence": false, 00:26:26.257 "rdma_max_cq_size": 0, 00:26:26.257 "rdma_cm_event_timeout_ms": 0, 00:26:26.257 "dhchap_digests": [ 00:26:26.257 "sha256", 00:26:26.257 "sha384", 00:26:26.257 "sha512" 00:26:26.257 ], 00:26:26.257 "dhchap_dhgroups": [ 00:26:26.257 "null", 00:26:26.257 "ffdhe2048", 00:26:26.257 "ffdhe3072", 00:26:26.257 "ffdhe4096", 00:26:26.257 "ffdhe6144", 00:26:26.257 "ffdhe8192" 00:26:26.257 ] 00:26:26.257 } 00:26:26.257 }, 00:26:26.257 { 00:26:26.257 "method": "bdev_nvme_attach_controller", 00:26:26.257 "params": { 00:26:26.257 "name": "TLSTEST", 00:26:26.257 "trtype": "TCP", 00:26:26.257 "adrfam": "IPv4", 00:26:26.257 "traddr": "10.0.0.2", 00:26:26.257 "trsvcid": "4420", 00:26:26.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:26.257 "prchk_reftag": false, 00:26:26.257 "prchk_guard": false, 00:26:26.257 "ctrlr_loss_timeout_sec": 0, 00:26:26.257 "reconnect_delay_sec": 0, 00:26:26.257 "fast_io_fail_timeout_sec": 0, 00:26:26.257 "psk": "key0", 00:26:26.257 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:26.257 "hdgst": false, 00:26:26.257 "ddgst": false 00:26:26.257 } 00:26:26.257 }, 00:26:26.257 { 00:26:26.257 "method": "bdev_nvme_set_hotplug", 00:26:26.257 "params": { 00:26:26.257 "period_us": 100000, 00:26:26.257 "enable": false 00:26:26.257 } 00:26:26.257 }, 00:26:26.257 { 00:26:26.257 "method": "bdev_wait_for_examine" 00:26:26.257 } 00:26:26.257 ] 00:26:26.257 }, 00:26:26.257 { 00:26:26.257 "subsystem": "nbd", 00:26:26.257 "config": [] 00:26:26.257 } 00:26:26.257 ] 00:26:26.257 }' 00:26:26.257 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3071675 00:26:26.257 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3071675 ']' 00:26:26.257 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3071675 00:26:26.257 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@955 -- # uname 00:26:26.257 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:26.257 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3071675 00:26:26.257 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:26.257 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:26.257 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3071675' 00:26:26.257 killing process with pid 3071675 00:26:26.257 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3071675 00:26:26.257 Received shutdown signal, test time was about 10.000000 seconds 00:26:26.257 00:26:26.257 Latency(us) 00:26:26.257 [2024-10-07T12:36:49.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.257 [2024-10-07T12:36:49.966Z] =================================================================================================================== 00:26:26.257 [2024-10-07T12:36:49.966Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:26.257 14:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3071675 00:26:26.829 14:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3071311 00:26:26.829 14:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3071311 ']' 00:26:26.829 14:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3071311 00:26:26.829 14:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:26.829 14:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:26.829 14:36:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3071311 00:26:26.829 14:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:26.829 14:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:26.829 14:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3071311' 00:26:26.829 killing process with pid 3071311 00:26:26.829 14:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3071311 00:26:26.829 14:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3071311 00:26:27.771 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:26:27.771 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:27.771 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:27.771 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:27.771 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:26:27.771 "subsystems": [ 00:26:27.771 { 00:26:27.771 "subsystem": "keyring", 00:26:27.771 "config": [ 00:26:27.771 { 00:26:27.771 "method": "keyring_file_add_key", 00:26:27.771 "params": { 00:26:27.771 "name": "key0", 00:26:27.771 "path": "/tmp/tmp.awhQqek6oU" 00:26:27.771 } 00:26:27.771 } 00:26:27.771 ] 00:26:27.771 }, 00:26:27.771 { 00:26:27.771 "subsystem": "iobuf", 00:26:27.771 "config": [ 00:26:27.771 { 00:26:27.771 "method": "iobuf_set_options", 00:26:27.771 "params": { 00:26:27.771 "small_pool_count": 8192, 00:26:27.771 "large_pool_count": 1024, 00:26:27.771 "small_bufsize": 8192, 00:26:27.771 "large_bufsize": 135168 00:26:27.771 } 00:26:27.771 } 00:26:27.771 ] 00:26:27.771 }, 00:26:27.771 { 
00:26:27.771 "subsystem": "sock", 00:26:27.771 "config": [ 00:26:27.771 { 00:26:27.771 "method": "sock_set_default_impl", 00:26:27.771 "params": { 00:26:27.771 "impl_name": "posix" 00:26:27.771 } 00:26:27.771 }, 00:26:27.771 { 00:26:27.771 "method": "sock_impl_set_options", 00:26:27.771 "params": { 00:26:27.771 "impl_name": "ssl", 00:26:27.771 "recv_buf_size": 4096, 00:26:27.771 "send_buf_size": 4096, 00:26:27.771 "enable_recv_pipe": true, 00:26:27.771 "enable_quickack": false, 00:26:27.771 "enable_placement_id": 0, 00:26:27.771 "enable_zerocopy_send_server": true, 00:26:27.771 "enable_zerocopy_send_client": false, 00:26:27.771 "zerocopy_threshold": 0, 00:26:27.771 "tls_version": 0, 00:26:27.771 "enable_ktls": false 00:26:27.771 } 00:26:27.771 }, 00:26:27.771 { 00:26:27.771 "method": "sock_impl_set_options", 00:26:27.771 "params": { 00:26:27.771 "impl_name": "posix", 00:26:27.771 "recv_buf_size": 2097152, 00:26:27.771 "send_buf_size": 2097152, 00:26:27.771 "enable_recv_pipe": true, 00:26:27.771 "enable_quickack": false, 00:26:27.771 "enable_placement_id": 0, 00:26:27.771 "enable_zerocopy_send_server": true, 00:26:27.771 "enable_zerocopy_send_client": false, 00:26:27.771 "zerocopy_threshold": 0, 00:26:27.771 "tls_version": 0, 00:26:27.771 "enable_ktls": false 00:26:27.771 } 00:26:27.771 } 00:26:27.771 ] 00:26:27.771 }, 00:26:27.771 { 00:26:27.771 "subsystem": "vmd", 00:26:27.771 "config": [] 00:26:27.771 }, 00:26:27.771 { 00:26:27.771 "subsystem": "accel", 00:26:27.771 "config": [ 00:26:27.771 { 00:26:27.772 "method": "accel_set_options", 00:26:27.772 "params": { 00:26:27.772 "small_cache_size": 128, 00:26:27.772 "large_cache_size": 16, 00:26:27.772 "task_count": 2048, 00:26:27.772 "sequence_count": 2048, 00:26:27.772 "buf_count": 2048 00:26:27.772 } 00:26:27.772 } 00:26:27.772 ] 00:26:27.772 }, 00:26:27.772 { 00:26:27.772 "subsystem": "bdev", 00:26:27.772 "config": [ 00:26:27.772 { 00:26:27.772 "method": "bdev_set_options", 00:26:27.772 "params": { 00:26:27.772 
"bdev_io_pool_size": 65535, 00:26:27.772 "bdev_io_cache_size": 256, 00:26:27.772 "bdev_auto_examine": true, 00:26:27.772 "iobuf_small_cache_size": 128, 00:26:27.772 "iobuf_large_cache_size": 16 00:26:27.772 } 00:26:27.772 }, 00:26:27.772 { 00:26:27.772 "method": "bdev_raid_set_options", 00:26:27.772 "params": { 00:26:27.772 "process_window_size_kb": 1024, 00:26:27.772 "process_max_bandwidth_mb_sec": 0 00:26:27.772 } 00:26:27.772 }, 00:26:27.772 { 00:26:27.772 "method": "bdev_iscsi_set_options", 00:26:27.772 "params": { 00:26:27.772 "timeout_sec": 30 00:26:27.772 } 00:26:27.772 }, 00:26:27.772 { 00:26:27.772 "method": "bdev_nvme_set_options", 00:26:27.772 "params": { 00:26:27.772 "action_on_timeout": "none", 00:26:27.772 "timeout_us": 0, 00:26:27.772 "timeout_admin_us": 0, 00:26:27.772 "keep_alive_timeout_ms": 10000, 00:26:27.772 "arbitration_burst": 0, 00:26:27.772 "low_priority_weight": 0, 00:26:27.772 "medium_priority_weight": 0, 00:26:27.772 "high_priority_weight": 0, 00:26:27.772 "nvme_adminq_poll_period_us": 10000, 00:26:27.772 "nvme_ioq_poll_period_us": 0, 00:26:27.772 "io_queue_requests": 0, 00:26:27.772 "delay_cmd_submit": true, 00:26:27.772 "transport_retry_count": 4, 00:26:27.772 "bdev_retry_count": 3, 00:26:27.772 "transport_ack_timeout": 0, 00:26:27.772 "ctrlr_loss_timeout_sec": 0, 00:26:27.772 "reconnect_delay_sec": 0, 00:26:27.772 "fast_io_fail_timeout_sec": 0, 00:26:27.772 "disable_auto_failback": false, 00:26:27.772 "generate_uuids": false, 00:26:27.772 "transport_tos": 0, 00:26:27.772 "nvme_error_stat": false, 00:26:27.772 "rdma_srq_size": 0, 00:26:27.772 "io_path_stat": false, 00:26:27.772 "allow_accel_sequence": false, 00:26:27.772 "rdma_max_cq_size": 0, 00:26:27.772 "rdma_cm_event_timeout_ms": 0, 00:26:27.772 "dhchap_digests": [ 00:26:27.772 "sha256", 00:26:27.772 "sha384", 00:26:27.772 "sha512" 00:26:27.772 ], 00:26:27.772 "dhchap_dhgroups": [ 00:26:27.772 "null", 00:26:27.772 "ffdhe2048", 00:26:27.772 "ffdhe3072", 00:26:27.772 "ffdhe4096", 
00:26:27.772 "ffdhe6144", 00:26:27.772 "ffdhe8192" 00:26:27.772 ] 00:26:27.772 } 00:26:27.772 }, 00:26:27.772 { 00:26:27.772 "method": "bdev_nvme_set_hotplug", 00:26:27.772 "params": { 00:26:27.772 "period_us": 100000, 00:26:27.772 "enable": false 00:26:27.772 } 00:26:27.772 }, 00:26:27.772 { 00:26:27.772 "method": "bdev_malloc_create", 00:26:27.772 "params": { 00:26:27.772 "name": "malloc0", 00:26:27.772 "num_blocks": 8192, 00:26:27.772 "block_size": 4096, 00:26:27.772 "physical_block_size": 4096, 00:26:27.772 "uuid": "ffbe41d4-2b18-4af6-be28-a348fd095f86", 00:26:27.772 "optimal_io_boundary": 0, 00:26:27.772 "md_size": 0, 00:26:27.772 "dif_type": 0, 00:26:27.772 "dif_is_head_of_md": false, 00:26:27.772 "dif_pi_format": 0 00:26:27.772 } 00:26:27.772 }, 00:26:27.772 { 00:26:27.772 "method": "bdev_wait_for_examine" 00:26:27.772 } 00:26:27.772 ] 00:26:27.772 }, 00:26:27.772 { 00:26:27.772 "subsystem": "nbd", 00:26:27.772 "config": [] 00:26:27.772 }, 00:26:27.772 { 00:26:27.772 "subsystem": "scheduler", 00:26:27.772 "config": [ 00:26:27.772 { 00:26:27.772 "method": "framework_set_scheduler", 00:26:27.772 "params": { 00:26:27.772 "name": "static" 00:26:27.772 } 00:26:27.772 } 00:26:27.772 ] 00:26:27.772 }, 00:26:27.772 { 00:26:27.772 "subsystem": "nvmf", 00:26:27.772 "config": [ 00:26:27.772 { 00:26:27.772 "method": "nvmf_set_config", 00:26:27.772 "params": { 00:26:27.772 "discovery_filter": "match_any", 00:26:27.772 "admin_cmd_passthru": { 00:26:27.772 "identify_ctrlr": false 00:26:27.772 }, 00:26:27.772 "dhchap_digests": [ 00:26:27.772 "sha256", 00:26:27.772 "sha384", 00:26:27.772 "sha512" 00:26:27.772 ], 00:26:27.772 "dhchap_dhgroups": [ 00:26:27.772 "null", 00:26:27.772 "ffdhe2048", 00:26:27.772 "ffdhe3072", 00:26:27.772 "ffdhe4096", 00:26:27.772 "ffdhe6144", 00:26:27.772 "ffdhe8192" 00:26:27.772 ] 00:26:27.772 } 00:26:27.772 }, 00:26:27.772 { 00:26:27.772 "method": "nvmf_set_max_subsystems", 00:26:27.772 "params": { 00:26:27.772 "max_subsystems": 1024 00:26:27.772 
} 00:26:27.772 }, 00:26:27.772 { 00:26:27.772 "method": "nvmf_set_crdt", 00:26:27.772 "params": { 00:26:27.772 "crdt1": 0, 00:26:27.772 "crdt2": 0, 00:26:27.772 "crdt3": 0 00:26:27.772 } 00:26:27.772 }, 00:26:27.772 { 00:26:27.772 "method": "nvmf_create_transport", 00:26:27.772 "params": { 00:26:27.772 "trtype": "TCP", 00:26:27.772 "max_queue_depth": 128, 00:26:27.772 "max_io_qpairs_per_ctrlr": 127, 00:26:27.772 "in_capsule_data_size": 4096, 00:26:27.772 "max_io_size": 131072, 00:26:27.772 "io_unit_size": 131072, 00:26:27.772 "max_aq_depth": 128, 00:26:27.772 "num_shared_buffers": 511, 00:26:27.772 "buf_cache_size": 4294967295, 00:26:27.772 "dif_insert_or_strip": false, 00:26:27.772 "zcopy": false, 00:26:27.772 "c2h_success": false, 00:26:27.772 "sock_priority": 0, 00:26:27.772 "abort_timeout_sec": 1, 00:26:27.772 "ack_timeout": 0, 00:26:27.772 "data_wr_pool_size": 0 00:26:27.772 } 00:26:27.772 }, 00:26:27.772 { 00:26:27.772 "method": "nvmf_create_subsystem", 00:26:27.772 "params": { 00:26:27.772 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.772 "allow_any_host": false, 00:26:27.772 "serial_number": "SPDK00000000000001", 00:26:27.772 "model_number": "SPDK bdev Controller", 00:26:27.772 "max_namespaces": 10, 00:26:27.772 "min_cntlid": 1, 00:26:27.772 "max_cntlid": 65519, 00:26:27.772 "ana_reporting": false 00:26:27.772 } 00:26:27.772 }, 00:26:27.772 { 00:26:27.772 "method": "nvmf_subsystem_add_host", 00:26:27.772 "params": { 00:26:27.772 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.772 "host": "nqn.2016-06.io.spdk:host1", 00:26:27.772 "psk": "key0" 00:26:27.772 } 00:26:27.772 }, 00:26:27.772 { 00:26:27.772 "method": "nvmf_subsystem_add_ns", 00:26:27.772 "params": { 00:26:27.772 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.772 "namespace": { 00:26:27.772 "nsid": 1, 00:26:27.772 "bdev_name": "malloc0", 00:26:27.772 "nguid": "FFBE41D42B184AF6BE28A348FD095F86", 00:26:27.772 "uuid": "ffbe41d4-2b18-4af6-be28-a348fd095f86", 00:26:27.772 "no_auto_visible": false 
00:26:27.772 } 00:26:27.772 } 00:26:27.772 }, 00:26:27.772 { 00:26:27.772 "method": "nvmf_subsystem_add_listener", 00:26:27.772 "params": { 00:26:27.772 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.772 "listen_address": { 00:26:27.772 "trtype": "TCP", 00:26:27.772 "adrfam": "IPv4", 00:26:27.772 "traddr": "10.0.0.2", 00:26:27.772 "trsvcid": "4420" 00:26:27.772 }, 00:26:27.772 "secure_channel": true 00:26:27.772 } 00:26:27.772 } 00:26:27.772 ] 00:26:27.772 } 00:26:27.772 ] 00:26:27.772 }' 00:26:27.772 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3072375 00:26:27.772 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3072375 00:26:27.772 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:26:27.772 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3072375 ']' 00:26:27.772 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.772 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:27.772 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.772 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:27.772 14:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:27.773 [2024-10-07 14:36:51.264711] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:26:27.773 [2024-10-07 14:36:51.264827] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:27.773 [2024-10-07 14:36:51.408299] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.033 [2024-10-07 14:36:51.553042] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.033 [2024-10-07 14:36:51.553083] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.033 [2024-10-07 14:36:51.553091] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.033 [2024-10-07 14:36:51.553102] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:28.033 [2024-10-07 14:36:51.553109] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:28.033 [2024-10-07 14:36:51.554044] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.293 [2024-10-07 14:36:51.909349] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.293 [2024-10-07 14:36:51.941374] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:28.293 [2024-10-07 14:36:51.941625] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.553 14:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:28.553 14:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:28.553 14:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:28.553 14:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:28.553 14:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:28.553 14:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.553 14:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3072403 00:26:28.553 14:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3072403 /var/tmp/bdevperf.sock 00:26:28.553 14:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3072403 ']' 00:26:28.553 14:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:28.553 14:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:28.553 14:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:26:28.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:28.553 14:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:26:28.553 14:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:28.553 14:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:28.553 14:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:26:28.553 "subsystems": [ 00:26:28.553 { 00:26:28.553 "subsystem": "keyring", 00:26:28.554 "config": [ 00:26:28.554 { 00:26:28.554 "method": "keyring_file_add_key", 00:26:28.554 "params": { 00:26:28.554 "name": "key0", 00:26:28.554 "path": "/tmp/tmp.awhQqek6oU" 00:26:28.554 } 00:26:28.554 } 00:26:28.554 ] 00:26:28.554 }, 00:26:28.554 { 00:26:28.554 "subsystem": "iobuf", 00:26:28.554 "config": [ 00:26:28.554 { 00:26:28.554 "method": "iobuf_set_options", 00:26:28.554 "params": { 00:26:28.554 "small_pool_count": 8192, 00:26:28.554 "large_pool_count": 1024, 00:26:28.554 "small_bufsize": 8192, 00:26:28.554 "large_bufsize": 135168 00:26:28.554 } 00:26:28.554 } 00:26:28.554 ] 00:26:28.554 }, 00:26:28.554 { 00:26:28.554 "subsystem": "sock", 00:26:28.554 "config": [ 00:26:28.554 { 00:26:28.554 "method": "sock_set_default_impl", 00:26:28.554 "params": { 00:26:28.554 "impl_name": "posix" 00:26:28.554 } 00:26:28.554 }, 00:26:28.554 { 00:26:28.554 "method": "sock_impl_set_options", 00:26:28.554 "params": { 00:26:28.554 "impl_name": "ssl", 00:26:28.554 "recv_buf_size": 4096, 00:26:28.554 "send_buf_size": 4096, 00:26:28.554 "enable_recv_pipe": true, 00:26:28.554 "enable_quickack": false, 00:26:28.554 "enable_placement_id": 0, 00:26:28.554 "enable_zerocopy_send_server": true, 00:26:28.554 "enable_zerocopy_send_client": false, 00:26:28.554 
"zerocopy_threshold": 0, 00:26:28.554 "tls_version": 0, 00:26:28.554 "enable_ktls": false 00:26:28.554 } 00:26:28.554 }, 00:26:28.554 { 00:26:28.554 "method": "sock_impl_set_options", 00:26:28.554 "params": { 00:26:28.554 "impl_name": "posix", 00:26:28.554 "recv_buf_size": 2097152, 00:26:28.554 "send_buf_size": 2097152, 00:26:28.554 "enable_recv_pipe": true, 00:26:28.554 "enable_quickack": false, 00:26:28.554 "enable_placement_id": 0, 00:26:28.554 "enable_zerocopy_send_server": true, 00:26:28.554 "enable_zerocopy_send_client": false, 00:26:28.554 "zerocopy_threshold": 0, 00:26:28.554 "tls_version": 0, 00:26:28.554 "enable_ktls": false 00:26:28.554 } 00:26:28.554 } 00:26:28.554 ] 00:26:28.554 }, 00:26:28.554 { 00:26:28.554 "subsystem": "vmd", 00:26:28.554 "config": [] 00:26:28.554 }, 00:26:28.554 { 00:26:28.554 "subsystem": "accel", 00:26:28.554 "config": [ 00:26:28.554 { 00:26:28.554 "method": "accel_set_options", 00:26:28.554 "params": { 00:26:28.554 "small_cache_size": 128, 00:26:28.554 "large_cache_size": 16, 00:26:28.554 "task_count": 2048, 00:26:28.554 "sequence_count": 2048, 00:26:28.554 "buf_count": 2048 00:26:28.554 } 00:26:28.554 } 00:26:28.554 ] 00:26:28.554 }, 00:26:28.554 { 00:26:28.554 "subsystem": "bdev", 00:26:28.554 "config": [ 00:26:28.554 { 00:26:28.554 "method": "bdev_set_options", 00:26:28.554 "params": { 00:26:28.554 "bdev_io_pool_size": 65535, 00:26:28.554 "bdev_io_cache_size": 256, 00:26:28.554 "bdev_auto_examine": true, 00:26:28.554 "iobuf_small_cache_size": 128, 00:26:28.554 "iobuf_large_cache_size": 16 00:26:28.554 } 00:26:28.554 }, 00:26:28.554 { 00:26:28.554 "method": "bdev_raid_set_options", 00:26:28.554 "params": { 00:26:28.554 "process_window_size_kb": 1024, 00:26:28.554 "process_max_bandwidth_mb_sec": 0 00:26:28.554 } 00:26:28.554 }, 00:26:28.554 { 00:26:28.554 "method": "bdev_iscsi_set_options", 00:26:28.554 "params": { 00:26:28.554 "timeout_sec": 30 00:26:28.554 } 00:26:28.554 }, 00:26:28.554 { 00:26:28.554 "method": 
"bdev_nvme_set_options", 00:26:28.554 "params": { 00:26:28.554 "action_on_timeout": "none", 00:26:28.554 "timeout_us": 0, 00:26:28.554 "timeout_admin_us": 0, 00:26:28.554 "keep_alive_timeout_ms": 10000, 00:26:28.554 "arbitration_burst": 0, 00:26:28.554 "low_priority_weight": 0, 00:26:28.554 "medium_priority_weight": 0, 00:26:28.554 "high_priority_weight": 0, 00:26:28.554 "nvme_adminq_poll_period_us": 10000, 00:26:28.554 "nvme_ioq_poll_period_us": 0, 00:26:28.554 "io_queue_requests": 512, 00:26:28.554 "delay_cmd_submit": true, 00:26:28.554 "transport_retry_count": 4, 00:26:28.554 "bdev_retry_count": 3, 00:26:28.554 "transport_ack_timeout": 0, 00:26:28.554 "ctrlr_loss_timeout_sec": 0, 00:26:28.554 "reconnect_delay_sec": 0, 00:26:28.554 "fast_io_fail_timeout_sec": 0, 00:26:28.554 "disable_auto_failback": false, 00:26:28.554 "generate_uuids": false, 00:26:28.554 "transport_tos": 0, 00:26:28.554 "nvme_error_stat": false, 00:26:28.554 "rdma_srq_size": 0, 00:26:28.554 "io_path_stat": false, 00:26:28.554 "allow_accel_sequence": false, 00:26:28.554 "rdma_max_cq_size": 0, 00:26:28.554 "rdma_cm_event_timeout_ms": 0, 00:26:28.554 "dhchap_digests": [ 00:26:28.554 "sha256", 00:26:28.554 "sha384", 00:26:28.554 "sha512" 00:26:28.554 ], 00:26:28.554 "dhchap_dhgroups": [ 00:26:28.554 "null", 00:26:28.554 "ffdhe2048", 00:26:28.554 "ffdhe3072", 00:26:28.554 "ffdhe4096", 00:26:28.554 "ffdhe6144", 00:26:28.554 "ffdhe8192" 00:26:28.554 ] 00:26:28.554 } 00:26:28.554 }, 00:26:28.554 { 00:26:28.554 "method": "bdev_nvme_attach_controller", 00:26:28.554 "params": { 00:26:28.554 "name": "TLSTEST", 00:26:28.554 "trtype": "TCP", 00:26:28.554 "adrfam": "IPv4", 00:26:28.554 "traddr": "10.0.0.2", 00:26:28.554 "trsvcid": "4420", 00:26:28.554 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:28.554 "prchk_reftag": false, 00:26:28.554 "prchk_guard": false, 00:26:28.554 "ctrlr_loss_timeout_sec": 0, 00:26:28.554 "reconnect_delay_sec": 0, 00:26:28.554 "fast_io_fail_timeout_sec": 0, 00:26:28.554 "psk": 
"key0", 00:26:28.554 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:28.554 "hdgst": false, 00:26:28.554 "ddgst": false 00:26:28.554 } 00:26:28.554 }, 00:26:28.554 { 00:26:28.554 "method": "bdev_nvme_set_hotplug", 00:26:28.554 "params": { 00:26:28.554 "period_us": 100000, 00:26:28.554 "enable": false 00:26:28.554 } 00:26:28.554 }, 00:26:28.554 { 00:26:28.554 "method": "bdev_wait_for_examine" 00:26:28.554 } 00:26:28.554 ] 00:26:28.554 }, 00:26:28.554 { 00:26:28.554 "subsystem": "nbd", 00:26:28.554 "config": [] 00:26:28.554 } 00:26:28.554 ] 00:26:28.554 }' 00:26:28.554 [2024-10-07 14:36:52.140888] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:26:28.554 [2024-10-07 14:36:52.140985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072403 ] 00:26:28.554 [2024-10-07 14:36:52.254144] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.815 [2024-10-07 14:36:52.390377] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:29.075 [2024-10-07 14:36:52.647735] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:29.335 14:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:29.335 14:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:29.336 14:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:26:29.336 Running I/O for 10 seconds... 
00:26:31.662 4174.00 IOPS, 16.30 MiB/s [2024-10-07T12:36:56.313Z] 4672.50 IOPS, 18.25 MiB/s [2024-10-07T12:36:57.255Z] 4587.67 IOPS, 17.92 MiB/s [2024-10-07T12:36:58.196Z] 4636.00 IOPS, 18.11 MiB/s [2024-10-07T12:36:59.138Z] 4684.40 IOPS, 18.30 MiB/s [2024-10-07T12:37:00.081Z] 4738.00 IOPS, 18.51 MiB/s [2024-10-07T12:37:01.024Z] 4781.71 IOPS, 18.68 MiB/s [2024-10-07T12:37:02.408Z] 4784.00 IOPS, 18.69 MiB/s [2024-10-07T12:37:03.351Z] 4796.33 IOPS, 18.74 MiB/s [2024-10-07T12:37:03.351Z] 4763.10 IOPS, 18.61 MiB/s 00:26:39.642 Latency(us) 00:26:39.642 [2024-10-07T12:37:03.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:39.642 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:39.642 Verification LBA range: start 0x0 length 0x2000 00:26:39.642 TLSTESTn1 : 10.09 4732.79 18.49 0.00 0.00 26944.05 5051.73 107042.13 00:26:39.642 [2024-10-07T12:37:03.351Z] =================================================================================================================== 00:26:39.642 [2024-10-07T12:37:03.351Z] Total : 4732.79 18.49 0.00 0.00 26944.05 5051.73 107042.13 00:26:39.642 { 00:26:39.642 "results": [ 00:26:39.642 { 00:26:39.642 "job": "TLSTESTn1", 00:26:39.642 "core_mask": "0x4", 00:26:39.642 "workload": "verify", 00:26:39.642 "status": "finished", 00:26:39.642 "verify_range": { 00:26:39.642 "start": 0, 00:26:39.642 "length": 8192 00:26:39.642 }, 00:26:39.642 "queue_depth": 128, 00:26:39.642 "io_size": 4096, 00:26:39.642 "runtime": 10.091078, 00:26:39.642 "iops": 4732.794652860675, 00:26:39.642 "mibps": 18.487479112737013, 00:26:39.642 "io_failed": 0, 00:26:39.642 "io_timeout": 0, 00:26:39.642 "avg_latency_us": 26944.050154037282, 00:26:39.642 "min_latency_us": 5051.733333333334, 00:26:39.642 "max_latency_us": 107042.13333333333 00:26:39.642 } 00:26:39.642 ], 00:26:39.642 "core_count": 1 00:26:39.642 } 00:26:39.642 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:26:39.642 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3072403 00:26:39.642 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3072403 ']' 00:26:39.642 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3072403 00:26:39.642 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:39.642 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:39.642 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3072403 00:26:39.642 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:39.642 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:39.642 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3072403' 00:26:39.642 killing process with pid 3072403 00:26:39.642 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3072403 00:26:39.642 Received shutdown signal, test time was about 10.000000 seconds 00:26:39.642 00:26:39.642 Latency(us) 00:26:39.642 [2024-10-07T12:37:03.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:39.642 [2024-10-07T12:37:03.351Z] =================================================================================================================== 00:26:39.642 [2024-10-07T12:37:03.351Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:39.642 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3072403 00:26:40.214 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3072375 00:26:40.214 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@950 -- # '[' -z 3072375 ']' 00:26:40.214 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3072375 00:26:40.214 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:40.214 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:40.214 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3072375 00:26:40.214 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:40.214 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:40.214 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3072375' 00:26:40.214 killing process with pid 3072375 00:26:40.214 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3072375 00:26:40.214 14:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3072375 00:26:41.156 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:26:41.156 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:41.156 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:41.156 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:41.156 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3074813 00:26:41.156 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:41.156 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3074813 00:26:41.156 
14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3074813 ']' 00:26:41.156 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.156 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:41.156 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.156 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:41.156 14:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:41.157 [2024-10-07 14:37:04.601354] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:26:41.157 [2024-10-07 14:37:04.601468] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.157 [2024-10-07 14:37:04.731982] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.417 [2024-10-07 14:37:04.910338] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:41.417 [2024-10-07 14:37:04.910388] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:41.417 [2024-10-07 14:37:04.910400] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:41.417 [2024-10-07 14:37:04.910412] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:41.417 [2024-10-07 14:37:04.910421] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:41.417 [2024-10-07 14:37:04.911651] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.678 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:41.678 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:41.678 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:41.678 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:41.678 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:41.938 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.938 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.awhQqek6oU 00:26:41.938 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.awhQqek6oU 00:26:41.938 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:41.938 [2024-10-07 14:37:05.554103] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.938 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:42.198 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:42.458 [2024-10-07 14:37:05.919046] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:26:42.458 [2024-10-07 14:37:05.919313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:42.458 14:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:42.458 malloc0 00:26:42.458 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:42.718 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.awhQqek6oU 00:26:42.979 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:26:43.240 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:26:43.240 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3075391 00:26:43.240 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:43.240 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3075391 /var/tmp/bdevperf.sock 00:26:43.240 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3075391 ']' 00:26:43.240 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:43.240 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:43.240 
14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:43.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:43.240 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:43.240 14:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:43.240 [2024-10-07 14:37:06.796599] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:26:43.240 [2024-10-07 14:37:06.796706] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3075391 ] 00:26:43.240 [2024-10-07 14:37:06.923291] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.500 [2024-10-07 14:37:07.059806] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.073 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:44.073 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:44.073 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.awhQqek6oU 00:26:44.073 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:26:44.334 [2024-10-07 14:37:07.841975] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:26:44.334 nvme0n1 00:26:44.334 14:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:44.334 Running I/O for 1 seconds... 00:26:45.718 3363.00 IOPS, 13.14 MiB/s 00:26:45.718 Latency(us) 00:26:45.718 [2024-10-07T12:37:09.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:45.718 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:45.718 Verification LBA range: start 0x0 length 0x2000 00:26:45.718 nvme0n1 : 1.06 3308.91 12.93 0.00 0.00 37785.57 9885.01 52865.71 00:26:45.718 [2024-10-07T12:37:09.427Z] =================================================================================================================== 00:26:45.718 [2024-10-07T12:37:09.427Z] Total : 3308.91 12.93 0.00 0.00 37785.57 9885.01 52865.71 00:26:45.718 { 00:26:45.718 "results": [ 00:26:45.718 { 00:26:45.718 "job": "nvme0n1", 00:26:45.718 "core_mask": "0x2", 00:26:45.718 "workload": "verify", 00:26:45.718 "status": "finished", 00:26:45.718 "verify_range": { 00:26:45.718 "start": 0, 00:26:45.718 "length": 8192 00:26:45.718 }, 00:26:45.718 "queue_depth": 128, 00:26:45.718 "io_size": 4096, 00:26:45.718 "runtime": 1.055332, 00:26:45.718 "iops": 3308.9113189024874, 00:26:45.718 "mibps": 12.925434839462842, 00:26:45.718 "io_failed": 0, 00:26:45.718 "io_timeout": 0, 00:26:45.718 "avg_latency_us": 37785.568720885836, 00:26:45.718 "min_latency_us": 9885.013333333334, 00:26:45.718 "max_latency_us": 52865.706666666665 00:26:45.718 } 00:26:45.718 ], 00:26:45.718 "core_count": 1 00:26:45.718 } 00:26:45.718 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3075391 00:26:45.718 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3075391 ']' 00:26:45.718 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 3075391 00:26:45.718 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:45.718 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:45.718 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3075391 00:26:45.718 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:45.718 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:45.718 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3075391' 00:26:45.718 killing process with pid 3075391 00:26:45.718 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3075391 00:26:45.718 Received shutdown signal, test time was about 1.000000 seconds 00:26:45.718 00:26:45.718 Latency(us) 00:26:45.718 [2024-10-07T12:37:09.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:45.718 [2024-10-07T12:37:09.427Z] =================================================================================================================== 00:26:45.718 [2024-10-07T12:37:09.427Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:45.718 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3075391 00:26:45.980 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3074813 00:26:45.980 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3074813 ']' 00:26:45.980 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3074813 00:26:45.980 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:46.241 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:46.241 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3074813 00:26:46.241 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:46.241 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:46.241 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3074813' 00:26:46.241 killing process with pid 3074813 00:26:46.241 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3074813 00:26:46.241 14:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3074813 00:26:47.185 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:26:47.185 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:47.185 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:47.185 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:47.185 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3076131 00:26:47.185 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3076131 00:26:47.185 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:47.185 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3076131 ']' 00:26:47.185 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.185 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:26:47.185 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:47.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:47.185 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:47.185 14:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:47.185 [2024-10-07 14:37:10.806501] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:26:47.185 [2024-10-07 14:37:10.806608] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:47.446 [2024-10-07 14:37:10.939331] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.446 [2024-10-07 14:37:11.119019] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:47.446 [2024-10-07 14:37:11.119073] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:47.446 [2024-10-07 14:37:11.119085] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:47.446 [2024-10-07 14:37:11.119097] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:47.446 [2024-10-07 14:37:11.119106] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:47.446 [2024-10-07 14:37:11.120335] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.020 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:48.020 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:48.020 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:48.020 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:48.020 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:48.020 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:48.020 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:26:48.020 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.020 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:48.020 [2024-10-07 14:37:11.613910] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:48.020 malloc0 00:26:48.020 [2024-10-07 14:37:11.676394] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:48.020 [2024-10-07 14:37:11.676670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:48.020 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.020 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3076327 00:26:48.020 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3076327 /var/tmp/bdevperf.sock 00:26:48.020 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:26:48.020 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3076327 ']' 00:26:48.020 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:48.020 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:48.020 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:48.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:48.020 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:48.020 14:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:48.281 [2024-10-07 14:37:11.782544] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:26:48.281 [2024-10-07 14:37:11.782649] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3076327 ] 00:26:48.281 [2024-10-07 14:37:11.907745] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.542 [2024-10-07 14:37:12.044286] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.113 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:49.113 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:49.113 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.awhQqek6oU 00:26:49.113 14:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:26:49.376 [2024-10-07 14:37:12.894126] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:49.376 nvme0n1 00:26:49.376 14:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:49.637 Running I/O for 1 seconds... 
00:26:50.580 4201.00 IOPS, 16.41 MiB/s 00:26:50.580 Latency(us) 00:26:50.580 [2024-10-07T12:37:14.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:50.580 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:50.580 Verification LBA range: start 0x0 length 0x2000 00:26:50.580 nvme0n1 : 1.02 4243.71 16.58 0.00 0.00 29917.21 5215.57 40850.77 00:26:50.580 [2024-10-07T12:37:14.289Z] =================================================================================================================== 00:26:50.580 [2024-10-07T12:37:14.289Z] Total : 4243.71 16.58 0.00 0.00 29917.21 5215.57 40850.77 00:26:50.580 { 00:26:50.580 "results": [ 00:26:50.580 { 00:26:50.580 "job": "nvme0n1", 00:26:50.580 "core_mask": "0x2", 00:26:50.580 "workload": "verify", 00:26:50.580 "status": "finished", 00:26:50.580 "verify_range": { 00:26:50.580 "start": 0, 00:26:50.580 "length": 8192 00:26:50.580 }, 00:26:50.580 "queue_depth": 128, 00:26:50.580 "io_size": 4096, 00:26:50.580 "runtime": 1.020099, 00:26:50.580 "iops": 4243.70575797055, 00:26:50.580 "mibps": 16.57697561707246, 00:26:50.580 "io_failed": 0, 00:26:50.580 "io_timeout": 0, 00:26:50.580 "avg_latency_us": 29917.209862169857, 00:26:50.580 "min_latency_us": 5215.573333333334, 00:26:50.580 "max_latency_us": 40850.77333333333 00:26:50.580 } 00:26:50.580 ], 00:26:50.580 "core_count": 1 00:26:50.580 } 00:26:50.580 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:26:50.580 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.580 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:50.580 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.580 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:26:50.580 "subsystems": [ 00:26:50.580 { 00:26:50.580 "subsystem": 
"keyring", 00:26:50.580 "config": [ 00:26:50.580 { 00:26:50.580 "method": "keyring_file_add_key", 00:26:50.580 "params": { 00:26:50.580 "name": "key0", 00:26:50.580 "path": "/tmp/tmp.awhQqek6oU" 00:26:50.580 } 00:26:50.580 } 00:26:50.580 ] 00:26:50.580 }, 00:26:50.580 { 00:26:50.580 "subsystem": "iobuf", 00:26:50.580 "config": [ 00:26:50.580 { 00:26:50.580 "method": "iobuf_set_options", 00:26:50.580 "params": { 00:26:50.580 "small_pool_count": 8192, 00:26:50.580 "large_pool_count": 1024, 00:26:50.580 "small_bufsize": 8192, 00:26:50.580 "large_bufsize": 135168 00:26:50.580 } 00:26:50.580 } 00:26:50.580 ] 00:26:50.580 }, 00:26:50.580 { 00:26:50.580 "subsystem": "sock", 00:26:50.580 "config": [ 00:26:50.580 { 00:26:50.580 "method": "sock_set_default_impl", 00:26:50.580 "params": { 00:26:50.581 "impl_name": "posix" 00:26:50.581 } 00:26:50.581 }, 00:26:50.581 { 00:26:50.581 "method": "sock_impl_set_options", 00:26:50.581 "params": { 00:26:50.581 "impl_name": "ssl", 00:26:50.581 "recv_buf_size": 4096, 00:26:50.581 "send_buf_size": 4096, 00:26:50.581 "enable_recv_pipe": true, 00:26:50.581 "enable_quickack": false, 00:26:50.581 "enable_placement_id": 0, 00:26:50.581 "enable_zerocopy_send_server": true, 00:26:50.581 "enable_zerocopy_send_client": false, 00:26:50.581 "zerocopy_threshold": 0, 00:26:50.581 "tls_version": 0, 00:26:50.581 "enable_ktls": false 00:26:50.581 } 00:26:50.581 }, 00:26:50.581 { 00:26:50.581 "method": "sock_impl_set_options", 00:26:50.581 "params": { 00:26:50.581 "impl_name": "posix", 00:26:50.581 "recv_buf_size": 2097152, 00:26:50.581 "send_buf_size": 2097152, 00:26:50.581 "enable_recv_pipe": true, 00:26:50.581 "enable_quickack": false, 00:26:50.581 "enable_placement_id": 0, 00:26:50.581 "enable_zerocopy_send_server": true, 00:26:50.581 "enable_zerocopy_send_client": false, 00:26:50.581 "zerocopy_threshold": 0, 00:26:50.581 "tls_version": 0, 00:26:50.581 "enable_ktls": false 00:26:50.581 } 00:26:50.581 } 00:26:50.581 ] 00:26:50.581 }, 00:26:50.581 { 
00:26:50.581 "subsystem": "vmd", 00:26:50.581 "config": [] 00:26:50.581 }, 00:26:50.581 { 00:26:50.581 "subsystem": "accel", 00:26:50.581 "config": [ 00:26:50.581 { 00:26:50.581 "method": "accel_set_options", 00:26:50.581 "params": { 00:26:50.581 "small_cache_size": 128, 00:26:50.581 "large_cache_size": 16, 00:26:50.581 "task_count": 2048, 00:26:50.581 "sequence_count": 2048, 00:26:50.581 "buf_count": 2048 00:26:50.581 } 00:26:50.581 } 00:26:50.581 ] 00:26:50.581 }, 00:26:50.581 { 00:26:50.581 "subsystem": "bdev", 00:26:50.581 "config": [ 00:26:50.581 { 00:26:50.581 "method": "bdev_set_options", 00:26:50.581 "params": { 00:26:50.581 "bdev_io_pool_size": 65535, 00:26:50.581 "bdev_io_cache_size": 256, 00:26:50.581 "bdev_auto_examine": true, 00:26:50.581 "iobuf_small_cache_size": 128, 00:26:50.581 "iobuf_large_cache_size": 16 00:26:50.581 } 00:26:50.581 }, 00:26:50.581 { 00:26:50.581 "method": "bdev_raid_set_options", 00:26:50.581 "params": { 00:26:50.581 "process_window_size_kb": 1024, 00:26:50.581 "process_max_bandwidth_mb_sec": 0 00:26:50.581 } 00:26:50.581 }, 00:26:50.581 { 00:26:50.581 "method": "bdev_iscsi_set_options", 00:26:50.581 "params": { 00:26:50.581 "timeout_sec": 30 00:26:50.581 } 00:26:50.581 }, 00:26:50.581 { 00:26:50.581 "method": "bdev_nvme_set_options", 00:26:50.581 "params": { 00:26:50.581 "action_on_timeout": "none", 00:26:50.581 "timeout_us": 0, 00:26:50.581 "timeout_admin_us": 0, 00:26:50.581 "keep_alive_timeout_ms": 10000, 00:26:50.581 "arbitration_burst": 0, 00:26:50.581 "low_priority_weight": 0, 00:26:50.581 "medium_priority_weight": 0, 00:26:50.581 "high_priority_weight": 0, 00:26:50.581 "nvme_adminq_poll_period_us": 10000, 00:26:50.581 "nvme_ioq_poll_period_us": 0, 00:26:50.581 "io_queue_requests": 0, 00:26:50.581 "delay_cmd_submit": true, 00:26:50.581 "transport_retry_count": 4, 00:26:50.581 "bdev_retry_count": 3, 00:26:50.581 "transport_ack_timeout": 0, 00:26:50.581 "ctrlr_loss_timeout_sec": 0, 00:26:50.581 "reconnect_delay_sec": 0, 
00:26:50.581 "fast_io_fail_timeout_sec": 0, 00:26:50.581 "disable_auto_failback": false, 00:26:50.581 "generate_uuids": false, 00:26:50.581 "transport_tos": 0, 00:26:50.581 "nvme_error_stat": false, 00:26:50.581 "rdma_srq_size": 0, 00:26:50.581 "io_path_stat": false, 00:26:50.581 "allow_accel_sequence": false, 00:26:50.581 "rdma_max_cq_size": 0, 00:26:50.581 "rdma_cm_event_timeout_ms": 0, 00:26:50.581 "dhchap_digests": [ 00:26:50.581 "sha256", 00:26:50.581 "sha384", 00:26:50.581 "sha512" 00:26:50.581 ], 00:26:50.581 "dhchap_dhgroups": [ 00:26:50.581 "null", 00:26:50.581 "ffdhe2048", 00:26:50.581 "ffdhe3072", 00:26:50.581 "ffdhe4096", 00:26:50.581 "ffdhe6144", 00:26:50.581 "ffdhe8192" 00:26:50.581 ] 00:26:50.581 } 00:26:50.581 }, 00:26:50.581 { 00:26:50.581 "method": "bdev_nvme_set_hotplug", 00:26:50.581 "params": { 00:26:50.581 "period_us": 100000, 00:26:50.581 "enable": false 00:26:50.581 } 00:26:50.581 }, 00:26:50.581 { 00:26:50.581 "method": "bdev_malloc_create", 00:26:50.581 "params": { 00:26:50.581 "name": "malloc0", 00:26:50.581 "num_blocks": 8192, 00:26:50.581 "block_size": 4096, 00:26:50.581 "physical_block_size": 4096, 00:26:50.581 "uuid": "7ab888fa-0252-4abb-b3de-4f44640a25c9", 00:26:50.581 "optimal_io_boundary": 0, 00:26:50.581 "md_size": 0, 00:26:50.581 "dif_type": 0, 00:26:50.581 "dif_is_head_of_md": false, 00:26:50.581 "dif_pi_format": 0 00:26:50.581 } 00:26:50.581 }, 00:26:50.581 { 00:26:50.581 "method": "bdev_wait_for_examine" 00:26:50.581 } 00:26:50.581 ] 00:26:50.581 }, 00:26:50.581 { 00:26:50.581 "subsystem": "nbd", 00:26:50.581 "config": [] 00:26:50.581 }, 00:26:50.581 { 00:26:50.581 "subsystem": "scheduler", 00:26:50.581 "config": [ 00:26:50.581 { 00:26:50.581 "method": "framework_set_scheduler", 00:26:50.581 "params": { 00:26:50.581 "name": "static" 00:26:50.581 } 00:26:50.581 } 00:26:50.581 ] 00:26:50.581 }, 00:26:50.581 { 00:26:50.581 "subsystem": "nvmf", 00:26:50.581 "config": [ 00:26:50.581 { 00:26:50.581 "method": "nvmf_set_config", 
00:26:50.581 "params": { 00:26:50.581 "discovery_filter": "match_any", 00:26:50.581 "admin_cmd_passthru": { 00:26:50.581 "identify_ctrlr": false 00:26:50.581 }, 00:26:50.581 "dhchap_digests": [ 00:26:50.581 "sha256", 00:26:50.581 "sha384", 00:26:50.581 "sha512" 00:26:50.581 ], 00:26:50.581 "dhchap_dhgroups": [ 00:26:50.581 "null", 00:26:50.581 "ffdhe2048", 00:26:50.581 "ffdhe3072", 00:26:50.581 "ffdhe4096", 00:26:50.581 "ffdhe6144", 00:26:50.581 "ffdhe8192" 00:26:50.581 ] 00:26:50.581 } 00:26:50.581 }, 00:26:50.581 { 00:26:50.581 "method": "nvmf_set_max_subsystems", 00:26:50.581 "params": { 00:26:50.581 "max_subsystems": 1024 00:26:50.581 } 00:26:50.581 }, 00:26:50.581 { 00:26:50.581 "method": "nvmf_set_crdt", 00:26:50.581 "params": { 00:26:50.581 "crdt1": 0, 00:26:50.581 "crdt2": 0, 00:26:50.581 "crdt3": 0 00:26:50.581 } 00:26:50.581 }, 00:26:50.581 { 00:26:50.581 "method": "nvmf_create_transport", 00:26:50.581 "params": { 00:26:50.581 "trtype": "TCP", 00:26:50.581 "max_queue_depth": 128, 00:26:50.581 "max_io_qpairs_per_ctrlr": 127, 00:26:50.581 "in_capsule_data_size": 4096, 00:26:50.582 "max_io_size": 131072, 00:26:50.582 "io_unit_size": 131072, 00:26:50.582 "max_aq_depth": 128, 00:26:50.582 "num_shared_buffers": 511, 00:26:50.582 "buf_cache_size": 4294967295, 00:26:50.582 "dif_insert_or_strip": false, 00:26:50.582 "zcopy": false, 00:26:50.582 "c2h_success": false, 00:26:50.582 "sock_priority": 0, 00:26:50.582 "abort_timeout_sec": 1, 00:26:50.582 "ack_timeout": 0, 00:26:50.582 "data_wr_pool_size": 0 00:26:50.582 } 00:26:50.582 }, 00:26:50.582 { 00:26:50.582 "method": "nvmf_create_subsystem", 00:26:50.582 "params": { 00:26:50.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:50.582 "allow_any_host": false, 00:26:50.582 "serial_number": "00000000000000000000", 00:26:50.582 "model_number": "SPDK bdev Controller", 00:26:50.582 "max_namespaces": 32, 00:26:50.582 "min_cntlid": 1, 00:26:50.582 "max_cntlid": 65519, 00:26:50.582 "ana_reporting": false 00:26:50.582 } 
00:26:50.582 }, 00:26:50.582 { 00:26:50.582 "method": "nvmf_subsystem_add_host", 00:26:50.582 "params": { 00:26:50.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:50.582 "host": "nqn.2016-06.io.spdk:host1", 00:26:50.582 "psk": "key0" 00:26:50.582 } 00:26:50.582 }, 00:26:50.582 { 00:26:50.582 "method": "nvmf_subsystem_add_ns", 00:26:50.582 "params": { 00:26:50.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:50.582 "namespace": { 00:26:50.582 "nsid": 1, 00:26:50.582 "bdev_name": "malloc0", 00:26:50.582 "nguid": "7AB888FA02524ABBB3DE4F44640A25C9", 00:26:50.582 "uuid": "7ab888fa-0252-4abb-b3de-4f44640a25c9", 00:26:50.582 "no_auto_visible": false 00:26:50.582 } 00:26:50.582 } 00:26:50.582 }, 00:26:50.582 { 00:26:50.582 "method": "nvmf_subsystem_add_listener", 00:26:50.582 "params": { 00:26:50.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:50.582 "listen_address": { 00:26:50.582 "trtype": "TCP", 00:26:50.582 "adrfam": "IPv4", 00:26:50.582 "traddr": "10.0.0.2", 00:26:50.582 "trsvcid": "4420" 00:26:50.582 }, 00:26:50.582 "secure_channel": false, 00:26:50.582 "sock_impl": "ssl" 00:26:50.582 } 00:26:50.582 } 00:26:50.582 ] 00:26:50.582 } 00:26:50.582 ] 00:26:50.582 }' 00:26:50.582 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:26:50.844 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:26:50.844 "subsystems": [ 00:26:50.844 { 00:26:50.844 "subsystem": "keyring", 00:26:50.844 "config": [ 00:26:50.844 { 00:26:50.844 "method": "keyring_file_add_key", 00:26:50.844 "params": { 00:26:50.844 "name": "key0", 00:26:50.844 "path": "/tmp/tmp.awhQqek6oU" 00:26:50.844 } 00:26:50.844 } 00:26:50.844 ] 00:26:50.844 }, 00:26:50.844 { 00:26:50.844 "subsystem": "iobuf", 00:26:50.844 "config": [ 00:26:50.844 { 00:26:50.844 "method": "iobuf_set_options", 00:26:50.844 "params": { 00:26:50.844 "small_pool_count": 8192, 00:26:50.844 
"large_pool_count": 1024, 00:26:50.844 "small_bufsize": 8192, 00:26:50.844 "large_bufsize": 135168 00:26:50.844 } 00:26:50.844 } 00:26:50.844 ] 00:26:50.844 }, 00:26:50.844 { 00:26:50.844 "subsystem": "sock", 00:26:50.844 "config": [ 00:26:50.844 { 00:26:50.844 "method": "sock_set_default_impl", 00:26:50.844 "params": { 00:26:50.844 "impl_name": "posix" 00:26:50.844 } 00:26:50.844 }, 00:26:50.844 { 00:26:50.844 "method": "sock_impl_set_options", 00:26:50.844 "params": { 00:26:50.844 "impl_name": "ssl", 00:26:50.844 "recv_buf_size": 4096, 00:26:50.844 "send_buf_size": 4096, 00:26:50.844 "enable_recv_pipe": true, 00:26:50.844 "enable_quickack": false, 00:26:50.844 "enable_placement_id": 0, 00:26:50.844 "enable_zerocopy_send_server": true, 00:26:50.844 "enable_zerocopy_send_client": false, 00:26:50.844 "zerocopy_threshold": 0, 00:26:50.844 "tls_version": 0, 00:26:50.844 "enable_ktls": false 00:26:50.844 } 00:26:50.844 }, 00:26:50.844 { 00:26:50.844 "method": "sock_impl_set_options", 00:26:50.844 "params": { 00:26:50.844 "impl_name": "posix", 00:26:50.844 "recv_buf_size": 2097152, 00:26:50.844 "send_buf_size": 2097152, 00:26:50.844 "enable_recv_pipe": true, 00:26:50.844 "enable_quickack": false, 00:26:50.844 "enable_placement_id": 0, 00:26:50.844 "enable_zerocopy_send_server": true, 00:26:50.844 "enable_zerocopy_send_client": false, 00:26:50.844 "zerocopy_threshold": 0, 00:26:50.844 "tls_version": 0, 00:26:50.844 "enable_ktls": false 00:26:50.844 } 00:26:50.844 } 00:26:50.844 ] 00:26:50.844 }, 00:26:50.844 { 00:26:50.844 "subsystem": "vmd", 00:26:50.844 "config": [] 00:26:50.844 }, 00:26:50.844 { 00:26:50.844 "subsystem": "accel", 00:26:50.844 "config": [ 00:26:50.844 { 00:26:50.844 "method": "accel_set_options", 00:26:50.844 "params": { 00:26:50.844 "small_cache_size": 128, 00:26:50.844 "large_cache_size": 16, 00:26:50.844 "task_count": 2048, 00:26:50.844 "sequence_count": 2048, 00:26:50.844 "buf_count": 2048 00:26:50.844 } 00:26:50.844 } 00:26:50.844 ] 00:26:50.844 
}, 00:26:50.844 { 00:26:50.844 "subsystem": "bdev", 00:26:50.844 "config": [ 00:26:50.844 { 00:26:50.844 "method": "bdev_set_options", 00:26:50.844 "params": { 00:26:50.844 "bdev_io_pool_size": 65535, 00:26:50.844 "bdev_io_cache_size": 256, 00:26:50.844 "bdev_auto_examine": true, 00:26:50.844 "iobuf_small_cache_size": 128, 00:26:50.844 "iobuf_large_cache_size": 16 00:26:50.844 } 00:26:50.844 }, 00:26:50.844 { 00:26:50.844 "method": "bdev_raid_set_options", 00:26:50.844 "params": { 00:26:50.844 "process_window_size_kb": 1024, 00:26:50.844 "process_max_bandwidth_mb_sec": 0 00:26:50.844 } 00:26:50.844 }, 00:26:50.844 { 00:26:50.845 "method": "bdev_iscsi_set_options", 00:26:50.845 "params": { 00:26:50.845 "timeout_sec": 30 00:26:50.845 } 00:26:50.845 }, 00:26:50.845 { 00:26:50.845 "method": "bdev_nvme_set_options", 00:26:50.845 "params": { 00:26:50.845 "action_on_timeout": "none", 00:26:50.845 "timeout_us": 0, 00:26:50.845 "timeout_admin_us": 0, 00:26:50.845 "keep_alive_timeout_ms": 10000, 00:26:50.845 "arbitration_burst": 0, 00:26:50.845 "low_priority_weight": 0, 00:26:50.845 "medium_priority_weight": 0, 00:26:50.845 "high_priority_weight": 0, 00:26:50.845 "nvme_adminq_poll_period_us": 10000, 00:26:50.845 "nvme_ioq_poll_period_us": 0, 00:26:50.845 "io_queue_requests": 512, 00:26:50.845 "delay_cmd_submit": true, 00:26:50.845 "transport_retry_count": 4, 00:26:50.845 "bdev_retry_count": 3, 00:26:50.845 "transport_ack_timeout": 0, 00:26:50.845 "ctrlr_loss_timeout_sec": 0, 00:26:50.845 "reconnect_delay_sec": 0, 00:26:50.845 "fast_io_fail_timeout_sec": 0, 00:26:50.845 "disable_auto_failback": false, 00:26:50.845 "generate_uuids": false, 00:26:50.845 "transport_tos": 0, 00:26:50.845 "nvme_error_stat": false, 00:26:50.845 "rdma_srq_size": 0, 00:26:50.845 "io_path_stat": false, 00:26:50.845 "allow_accel_sequence": false, 00:26:50.845 "rdma_max_cq_size": 0, 00:26:50.845 "rdma_cm_event_timeout_ms": 0, 00:26:50.845 "dhchap_digests": [ 00:26:50.845 "sha256", 00:26:50.845 "sha384", 
00:26:50.845 "sha512" 00:26:50.845 ], 00:26:50.845 "dhchap_dhgroups": [ 00:26:50.845 "null", 00:26:50.845 "ffdhe2048", 00:26:50.845 "ffdhe3072", 00:26:50.845 "ffdhe4096", 00:26:50.845 "ffdhe6144", 00:26:50.845 "ffdhe8192" 00:26:50.845 ] 00:26:50.845 } 00:26:50.845 }, 00:26:50.845 { 00:26:50.845 "method": "bdev_nvme_attach_controller", 00:26:50.845 "params": { 00:26:50.845 "name": "nvme0", 00:26:50.845 "trtype": "TCP", 00:26:50.845 "adrfam": "IPv4", 00:26:50.845 "traddr": "10.0.0.2", 00:26:50.845 "trsvcid": "4420", 00:26:50.845 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:50.845 "prchk_reftag": false, 00:26:50.845 "prchk_guard": false, 00:26:50.845 "ctrlr_loss_timeout_sec": 0, 00:26:50.845 "reconnect_delay_sec": 0, 00:26:50.845 "fast_io_fail_timeout_sec": 0, 00:26:50.845 "psk": "key0", 00:26:50.845 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:50.845 "hdgst": false, 00:26:50.845 "ddgst": false 00:26:50.845 } 00:26:50.845 }, 00:26:50.845 { 00:26:50.845 "method": "bdev_nvme_set_hotplug", 00:26:50.845 "params": { 00:26:50.845 "period_us": 100000, 00:26:50.845 "enable": false 00:26:50.845 } 00:26:50.845 }, 00:26:50.845 { 00:26:50.845 "method": "bdev_enable_histogram", 00:26:50.845 "params": { 00:26:50.845 "name": "nvme0n1", 00:26:50.845 "enable": true 00:26:50.845 } 00:26:50.845 }, 00:26:50.845 { 00:26:50.845 "method": "bdev_wait_for_examine" 00:26:50.845 } 00:26:50.845 ] 00:26:50.845 }, 00:26:50.845 { 00:26:50.845 "subsystem": "nbd", 00:26:50.845 "config": [] 00:26:50.845 } 00:26:50.845 ] 00:26:50.845 }' 00:26:50.845 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3076327 00:26:50.845 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3076327 ']' 00:26:50.845 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3076327 00:26:50.845 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:50.845 14:37:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:50.845 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3076327 00:26:50.845 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:50.845 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:50.845 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3076327' 00:26:50.845 killing process with pid 3076327 00:26:50.845 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3076327 00:26:50.845 Received shutdown signal, test time was about 1.000000 seconds 00:26:50.845 00:26:50.845 Latency(us) 00:26:50.845 [2024-10-07T12:37:14.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:50.845 [2024-10-07T12:37:14.554Z] =================================================================================================================== 00:26:50.845 [2024-10-07T12:37:14.554Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:51.107 14:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3076327 00:26:51.367 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3076131 00:26:51.367 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3076131 ']' 00:26:51.367 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3076131 00:26:51.367 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:51.629 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:51.629 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o 
comm= 3076131 00:26:51.629 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:51.629 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:51.629 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3076131' 00:26:51.629 killing process with pid 3076131 00:26:51.629 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3076131 00:26:51.629 14:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3076131 00:26:52.571 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:26:52.571 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:52.571 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:52.571 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:26:52.571 "subsystems": [ 00:26:52.571 { 00:26:52.571 "subsystem": "keyring", 00:26:52.571 "config": [ 00:26:52.571 { 00:26:52.571 "method": "keyring_file_add_key", 00:26:52.571 "params": { 00:26:52.571 "name": "key0", 00:26:52.571 "path": "/tmp/tmp.awhQqek6oU" 00:26:52.571 } 00:26:52.571 } 00:26:52.571 ] 00:26:52.571 }, 00:26:52.571 { 00:26:52.571 "subsystem": "iobuf", 00:26:52.571 "config": [ 00:26:52.571 { 00:26:52.571 "method": "iobuf_set_options", 00:26:52.571 "params": { 00:26:52.571 "small_pool_count": 8192, 00:26:52.571 "large_pool_count": 1024, 00:26:52.571 "small_bufsize": 8192, 00:26:52.571 "large_bufsize": 135168 00:26:52.571 } 00:26:52.571 } 00:26:52.571 ] 00:26:52.571 }, 00:26:52.571 { 00:26:52.571 "subsystem": "sock", 00:26:52.571 "config": [ 00:26:52.571 { 00:26:52.571 "method": "sock_set_default_impl", 00:26:52.571 "params": { 00:26:52.571 "impl_name": "posix" 00:26:52.571 } 
00:26:52.571 }, 00:26:52.571 { 00:26:52.571 "method": "sock_impl_set_options", 00:26:52.571 "params": { 00:26:52.571 "impl_name": "ssl", 00:26:52.571 "recv_buf_size": 4096, 00:26:52.571 "send_buf_size": 4096, 00:26:52.572 "enable_recv_pipe": true, 00:26:52.572 "enable_quickack": false, 00:26:52.572 "enable_placement_id": 0, 00:26:52.572 "enable_zerocopy_send_server": true, 00:26:52.572 "enable_zerocopy_send_client": false, 00:26:52.572 "zerocopy_threshold": 0, 00:26:52.572 "tls_version": 0, 00:26:52.572 "enable_ktls": false 00:26:52.572 } 00:26:52.572 }, 00:26:52.572 { 00:26:52.572 "method": "sock_impl_set_options", 00:26:52.572 "params": { 00:26:52.572 "impl_name": "posix", 00:26:52.572 "recv_buf_size": 2097152, 00:26:52.572 "send_buf_size": 2097152, 00:26:52.572 "enable_recv_pipe": true, 00:26:52.572 "enable_quickack": false, 00:26:52.572 "enable_placement_id": 0, 00:26:52.572 "enable_zerocopy_send_server": true, 00:26:52.572 "enable_zerocopy_send_client": false, 00:26:52.572 "zerocopy_threshold": 0, 00:26:52.572 "tls_version": 0, 00:26:52.572 "enable_ktls": false 00:26:52.572 } 00:26:52.572 } 00:26:52.572 ] 00:26:52.572 }, 00:26:52.572 { 00:26:52.572 "subsystem": "vmd", 00:26:52.572 "config": [] 00:26:52.572 }, 00:26:52.572 { 00:26:52.572 "subsystem": "accel", 00:26:52.572 "config": [ 00:26:52.572 { 00:26:52.572 "method": "accel_set_options", 00:26:52.572 "params": { 00:26:52.572 "small_cache_size": 128, 00:26:52.572 "large_cache_size": 16, 00:26:52.572 "task_count": 2048, 00:26:52.572 "sequence_count": 2048, 00:26:52.572 "buf_count": 2048 00:26:52.572 } 00:26:52.572 } 00:26:52.572 ] 00:26:52.572 }, 00:26:52.572 { 00:26:52.572 "subsystem": "bdev", 00:26:52.572 "config": [ 00:26:52.572 { 00:26:52.572 "method": "bdev_set_options", 00:26:52.572 "params": { 00:26:52.572 "bdev_io_pool_size": 65535, 00:26:52.572 "bdev_io_cache_size": 256, 00:26:52.572 "bdev_auto_examine": true, 00:26:52.572 "iobuf_small_cache_size": 128, 00:26:52.572 "iobuf_large_cache_size": 16 
00:26:52.572 } 00:26:52.572 }, 00:26:52.572 { 00:26:52.572 "method": "bdev_raid_set_options", 00:26:52.572 "params": { 00:26:52.572 "process_window_size_kb": 1024, 00:26:52.572 "process_max_bandwidth_mb_sec": 0 00:26:52.572 } 00:26:52.572 }, 00:26:52.572 { 00:26:52.572 "method": "bdev_iscsi_set_options", 00:26:52.572 "params": { 00:26:52.572 "timeout_sec": 30 00:26:52.572 } 00:26:52.572 }, 00:26:52.572 { 00:26:52.572 "method": "bdev_nvme_set_options", 00:26:52.572 "params": { 00:26:52.572 "action_on_timeout": "none", 00:26:52.572 "timeout_us": 0, 00:26:52.572 "timeout_admin_us": 0, 00:26:52.572 "keep_alive_timeout_ms": 10000, 00:26:52.572 "arbitration_burst": 0, 00:26:52.572 "low_priority_weight": 0, 00:26:52.572 "medium_priority_weight": 0, 00:26:52.572 "high_priority_weight": 0, 00:26:52.572 "nvme_adminq_poll_period_us": 10000, 00:26:52.572 "nvme_ioq_poll_period_us": 0, 00:26:52.572 "io_queue_requests": 0, 00:26:52.572 "delay_cmd_submit": true, 00:26:52.572 "transport_retry_count": 4, 00:26:52.572 "bdev_retry_count": 3, 00:26:52.572 "transport_ack_timeout": 0, 00:26:52.572 "ctrlr_loss_timeout_sec": 0, 00:26:52.572 "reconnect_delay_sec": 0, 00:26:52.572 "fast_io_fail_timeout_sec": 0, 00:26:52.572 "disable_auto_failback": false, 00:26:52.572 "generate_uuids": false, 00:26:52.572 "transport_tos": 0, 00:26:52.572 "nvme_error_stat": false, 00:26:52.572 "rdma_srq_size": 0, 00:26:52.572 "io_path_stat": false, 00:26:52.572 "allow_accel_sequence": false, 00:26:52.572 "rdma_max_cq_size": 0, 00:26:52.572 "rdma_cm_event_timeout_ms": 0, 00:26:52.572 "dhchap_digests": [ 00:26:52.572 "sha256", 00:26:52.572 "sha384", 00:26:52.572 "sha512" 00:26:52.572 ], 00:26:52.572 "dhchap_dhgroups": [ 00:26:52.572 "null", 00:26:52.572 "ffdhe2048", 00:26:52.572 "ffdhe3072", 00:26:52.572 "ffdhe4096", 00:26:52.572 "ffdhe6144", 00:26:52.572 "ffdhe8192" 00:26:52.572 ] 00:26:52.572 } 00:26:52.572 }, 00:26:52.572 { 00:26:52.572 "method": "bdev_nvme_set_hotplug", 00:26:52.572 "params": { 00:26:52.572 
"period_us": 100000, 00:26:52.572 "enable": false 00:26:52.572 } 00:26:52.572 }, 00:26:52.572 { 00:26:52.572 "method": "bdev_malloc_create", 00:26:52.572 "params": { 00:26:52.572 "name": "malloc0", 00:26:52.572 "num_blocks": 8192, 00:26:52.572 "block_size": 4096, 00:26:52.572 "physical_block_size": 4096, 00:26:52.572 "uuid": "7ab888fa-0252-4abb-b3de-4f44640a25c9", 00:26:52.572 "optimal_io_boundary": 0, 00:26:52.572 "md_size": 0, 00:26:52.572 "dif_type": 0, 00:26:52.572 "dif_is_head_of_md": false, 00:26:52.572 "dif_pi_format": 0 00:26:52.572 } 00:26:52.572 }, 00:26:52.572 { 00:26:52.572 "method": "bdev_wait_for_examine" 00:26:52.572 } 00:26:52.572 ] 00:26:52.572 }, 00:26:52.572 { 00:26:52.572 "subsystem": "nbd", 00:26:52.572 "config": [] 00:26:52.572 }, 00:26:52.572 { 00:26:52.572 "subsystem": "scheduler", 00:26:52.572 "config": [ 00:26:52.572 { 00:26:52.572 "method": "framework_set_scheduler", 00:26:52.572 "params": { 00:26:52.572 "name": "static" 00:26:52.572 } 00:26:52.572 } 00:26:52.572 ] 00:26:52.572 }, 00:26:52.572 { 00:26:52.572 "subsystem": "nvmf", 00:26:52.572 "config": [ 00:26:52.572 { 00:26:52.572 "method": "nvmf_set_config", 00:26:52.572 "params": { 00:26:52.572 "discovery_filter": "match_any", 00:26:52.572 "admin_cmd_passthru": { 00:26:52.572 "identify_ctrlr": false 00:26:52.572 }, 00:26:52.572 "dhchap_digests": [ 00:26:52.572 "sha256", 00:26:52.572 "sha384", 00:26:52.572 "sha512" 00:26:52.572 ], 00:26:52.572 "dhchap_dhgroups": [ 00:26:52.572 "null", 00:26:52.572 "ffdhe2048", 00:26:52.572 "ffdhe3072", 00:26:52.572 "ffdhe4096", 00:26:52.572 "ffdhe6144", 00:26:52.572 "ffdhe8192" 00:26:52.572 ] 00:26:52.572 } 00:26:52.572 }, 00:26:52.572 { 00:26:52.572 "method": "nvmf_set_max_subsystems", 00:26:52.572 "params": { 00:26:52.572 "max_subsystems": 1024 00:26:52.572 } 00:26:52.572 }, 00:26:52.572 { 00:26:52.572 "method": "nvmf_set_crdt", 00:26:52.572 "params": { 00:26:52.572 "crdt1": 0, 00:26:52.572 "crdt2": 0, 00:26:52.572 "crdt3": 0 00:26:52.572 } 
00:26:52.572 }, 00:26:52.572 { 00:26:52.572 "method": "nvmf_create_transport", 00:26:52.572 "params": { 00:26:52.572 "trtype": "TCP", 00:26:52.572 "max_queue_depth": 128, 00:26:52.572 "max_io_qpairs_per_ctrlr": 127, 00:26:52.572 "in_capsule_data_size": 4096, 00:26:52.572 "max_io_size": 131072, 00:26:52.572 "io_unit_size": 131072, 00:26:52.572 "max_aq_depth": 128, 00:26:52.572 "num_shared_buffers": 511, 00:26:52.572 "buf_cache_size": 4294967295, 00:26:52.572 "dif_insert_or_strip": false, 00:26:52.572 "zcopy": false, 00:26:52.572 "c2h_success": false, 00:26:52.572 "sock_priority": 0, 00:26:52.572 "abort_timeout_sec": 1, 00:26:52.572 "ack_timeout": 0, 00:26:52.572 "data_wr_pool_size": 0 00:26:52.572 } 00:26:52.572 }, 00:26:52.572 { 00:26:52.572 "method": "nvmf_create_subsystem", 00:26:52.572 "params": { 00:26:52.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:52.572 "allow_any_host": false, 00:26:52.572 "serial_number": "00000000000000000000", 00:26:52.572 "model_number": "SPDK bdev Controller", 00:26:52.572 "max_namespaces": 32, 00:26:52.572 "min_cntlid": 1, 00:26:52.572 "max_cntlid": 65519, 00:26:52.573 "ana_reporting": false 00:26:52.573 } 00:26:52.573 }, 00:26:52.573 { 00:26:52.573 "method": "nvmf_subsystem_add_host", 00:26:52.573 "params": { 00:26:52.573 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:52.573 "host": "nqn.2016-06.io.spdk:host1", 00:26:52.573 "psk": "key0" 00:26:52.573 } 00:26:52.573 }, 00:26:52.573 { 00:26:52.573 "method": "nvmf_subsystem_add_ns", 00:26:52.573 "params": { 00:26:52.573 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:52.573 "namespace": { 00:26:52.573 "nsid": 1, 00:26:52.573 "bdev_name": "malloc0", 00:26:52.573 "nguid": "7AB888FA02524ABBB3DE4F44640A25C9", 00:26:52.573 "uuid": "7ab888fa-0252-4abb-b3de-4f44640a25c9", 00:26:52.573 "no_auto_visible": false 00:26:52.573 } 00:26:52.573 } 00:26:52.573 }, 00:26:52.573 { 00:26:52.573 "method": "nvmf_subsystem_add_listener", 00:26:52.573 "params": { 00:26:52.573 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:26:52.573 "listen_address": { 00:26:52.573 "trtype": "TCP", 00:26:52.573 "adrfam": "IPv4", 00:26:52.573 "traddr": "10.0.0.2", 00:26:52.573 "trsvcid": "4420" 00:26:52.573 }, 00:26:52.573 "secure_channel": false, 00:26:52.573 "sock_impl": "ssl" 00:26:52.573 } 00:26:52.573 } 00:26:52.573 ] 00:26:52.573 } 00:26:52.573 ] 00:26:52.573 }' 00:26:52.573 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:52.573 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3077177 00:26:52.573 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3077177 00:26:52.573 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:26:52.573 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3077177 ']' 00:26:52.573 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.573 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:52.573 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:52.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:52.573 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:52.573 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:52.573 [2024-10-07 14:37:16.191551] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:26:52.573 [2024-10-07 14:37:16.191669] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:52.833 [2024-10-07 14:37:16.325552] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.833 [2024-10-07 14:37:16.506459] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:52.833 [2024-10-07 14:37:16.506503] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:52.833 [2024-10-07 14:37:16.506515] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:52.833 [2024-10-07 14:37:16.506526] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:52.833 [2024-10-07 14:37:16.506536] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:52.833 [2024-10-07 14:37:16.507817] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.404 [2024-10-07 14:37:16.926981] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:53.404 [2024-10-07 14:37:16.958992] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:53.404 [2024-10-07 14:37:16.959260] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:53.404 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:53.404 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:53.404 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:53.404 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:53.404 14:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:53.404 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:53.404 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3077415 00:26:53.404 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3077415 /var/tmp/bdevperf.sock 00:26:53.404 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3077415 ']' 00:26:53.404 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:53.404 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:53.404 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:26:53.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:53.404 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:26:53.404 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:53.404 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:53.404 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:26:53.404 "subsystems": [ 00:26:53.404 { 00:26:53.404 "subsystem": "keyring", 00:26:53.404 "config": [ 00:26:53.404 { 00:26:53.404 "method": "keyring_file_add_key", 00:26:53.404 "params": { 00:26:53.404 "name": "key0", 00:26:53.404 "path": "/tmp/tmp.awhQqek6oU" 00:26:53.404 } 00:26:53.404 } 00:26:53.404 ] 00:26:53.404 }, 00:26:53.404 { 00:26:53.404 "subsystem": "iobuf", 00:26:53.404 "config": [ 00:26:53.404 { 00:26:53.404 "method": "iobuf_set_options", 00:26:53.404 "params": { 00:26:53.404 "small_pool_count": 8192, 00:26:53.404 "large_pool_count": 1024, 00:26:53.404 "small_bufsize": 8192, 00:26:53.404 "large_bufsize": 135168 00:26:53.404 } 00:26:53.404 } 00:26:53.404 ] 00:26:53.404 }, 00:26:53.404 { 00:26:53.404 "subsystem": "sock", 00:26:53.404 "config": [ 00:26:53.404 { 00:26:53.404 "method": "sock_set_default_impl", 00:26:53.404 "params": { 00:26:53.404 "impl_name": "posix" 00:26:53.404 } 00:26:53.404 }, 00:26:53.404 { 00:26:53.404 "method": "sock_impl_set_options", 00:26:53.404 "params": { 00:26:53.404 "impl_name": "ssl", 00:26:53.404 "recv_buf_size": 4096, 00:26:53.404 "send_buf_size": 4096, 00:26:53.404 "enable_recv_pipe": true, 00:26:53.404 "enable_quickack": false, 00:26:53.404 "enable_placement_id": 0, 00:26:53.404 "enable_zerocopy_send_server": true, 00:26:53.404 "enable_zerocopy_send_client": false, 00:26:53.404 
"zerocopy_threshold": 0, 00:26:53.404 "tls_version": 0, 00:26:53.404 "enable_ktls": false 00:26:53.404 } 00:26:53.404 }, 00:26:53.404 { 00:26:53.404 "method": "sock_impl_set_options", 00:26:53.404 "params": { 00:26:53.404 "impl_name": "posix", 00:26:53.404 "recv_buf_size": 2097152, 00:26:53.404 "send_buf_size": 2097152, 00:26:53.404 "enable_recv_pipe": true, 00:26:53.404 "enable_quickack": false, 00:26:53.404 "enable_placement_id": 0, 00:26:53.404 "enable_zerocopy_send_server": true, 00:26:53.404 "enable_zerocopy_send_client": false, 00:26:53.404 "zerocopy_threshold": 0, 00:26:53.404 "tls_version": 0, 00:26:53.404 "enable_ktls": false 00:26:53.404 } 00:26:53.404 } 00:26:53.404 ] 00:26:53.404 }, 00:26:53.404 { 00:26:53.404 "subsystem": "vmd", 00:26:53.404 "config": [] 00:26:53.404 }, 00:26:53.404 { 00:26:53.404 "subsystem": "accel", 00:26:53.404 "config": [ 00:26:53.404 { 00:26:53.404 "method": "accel_set_options", 00:26:53.404 "params": { 00:26:53.404 "small_cache_size": 128, 00:26:53.404 "large_cache_size": 16, 00:26:53.404 "task_count": 2048, 00:26:53.404 "sequence_count": 2048, 00:26:53.404 "buf_count": 2048 00:26:53.404 } 00:26:53.404 } 00:26:53.404 ] 00:26:53.404 }, 00:26:53.404 { 00:26:53.404 "subsystem": "bdev", 00:26:53.404 "config": [ 00:26:53.404 { 00:26:53.404 "method": "bdev_set_options", 00:26:53.404 "params": { 00:26:53.404 "bdev_io_pool_size": 65535, 00:26:53.404 "bdev_io_cache_size": 256, 00:26:53.404 "bdev_auto_examine": true, 00:26:53.404 "iobuf_small_cache_size": 128, 00:26:53.404 "iobuf_large_cache_size": 16 00:26:53.404 } 00:26:53.404 }, 00:26:53.404 { 00:26:53.404 "method": "bdev_raid_set_options", 00:26:53.404 "params": { 00:26:53.404 "process_window_size_kb": 1024, 00:26:53.404 "process_max_bandwidth_mb_sec": 0 00:26:53.404 } 00:26:53.404 }, 00:26:53.404 { 00:26:53.404 "method": "bdev_iscsi_set_options", 00:26:53.404 "params": { 00:26:53.405 "timeout_sec": 30 00:26:53.405 } 00:26:53.405 }, 00:26:53.405 { 00:26:53.405 "method": 
"bdev_nvme_set_options", 00:26:53.405 "params": { 00:26:53.405 "action_on_timeout": "none", 00:26:53.405 "timeout_us": 0, 00:26:53.405 "timeout_admin_us": 0, 00:26:53.405 "keep_alive_timeout_ms": 10000, 00:26:53.405 "arbitration_burst": 0, 00:26:53.405 "low_priority_weight": 0, 00:26:53.405 "medium_priority_weight": 0, 00:26:53.405 "high_priority_weight": 0, 00:26:53.405 "nvme_adminq_poll_period_us": 10000, 00:26:53.405 "nvme_ioq_poll_period_us": 0, 00:26:53.405 "io_queue_requests": 512, 00:26:53.405 "delay_cmd_submit": true, 00:26:53.405 "transport_retry_count": 4, 00:26:53.405 "bdev_retry_count": 3, 00:26:53.405 "transport_ack_timeout": 0, 00:26:53.405 "ctrlr_loss_timeout_sec": 0, 00:26:53.405 "reconnect_delay_sec": 0, 00:26:53.405 "fast_io_fail_timeout_sec": 0, 00:26:53.405 "disable_auto_failback": false, 00:26:53.405 "generate_uuids": false, 00:26:53.405 "transport_tos": 0, 00:26:53.405 "nvme_error_stat": false, 00:26:53.405 "rdma_srq_size": 0, 00:26:53.405 "io_path_stat": false, 00:26:53.405 "allow_accel_sequence": false, 00:26:53.405 "rdma_max_cq_size": 0, 00:26:53.405 "rdma_cm_event_timeout_ms": 0, 00:26:53.405 "dhchap_digests": [ 00:26:53.405 "sha256", 00:26:53.405 "sha384", 00:26:53.405 "sha512" 00:26:53.405 ], 00:26:53.405 "dhchap_dhgroups": [ 00:26:53.405 "null", 00:26:53.405 "ffdhe2048", 00:26:53.405 "ffdhe3072", 00:26:53.405 "ffdhe4096", 00:26:53.405 "ffdhe6144", 00:26:53.405 "ffdhe8192" 00:26:53.405 ] 00:26:53.405 } 00:26:53.405 }, 00:26:53.405 { 00:26:53.405 "method": "bdev_nvme_attach_controller", 00:26:53.405 "params": { 00:26:53.405 "name": "nvme0", 00:26:53.405 "trtype": "TCP", 00:26:53.405 "adrfam": "IPv4", 00:26:53.405 "traddr": "10.0.0.2", 00:26:53.405 "trsvcid": "4420", 00:26:53.405 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:53.405 "prchk_reftag": false, 00:26:53.405 "prchk_guard": false, 00:26:53.405 "ctrlr_loss_timeout_sec": 0, 00:26:53.405 "reconnect_delay_sec": 0, 00:26:53.405 "fast_io_fail_timeout_sec": 0, 00:26:53.405 "psk": "key0", 
00:26:53.405 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:53.405 "hdgst": false, 00:26:53.405 "ddgst": false 00:26:53.405 } 00:26:53.405 }, 00:26:53.405 { 00:26:53.405 "method": "bdev_nvme_set_hotplug", 00:26:53.405 "params": { 00:26:53.405 "period_us": 100000, 00:26:53.405 "enable": false 00:26:53.405 } 00:26:53.405 }, 00:26:53.405 { 00:26:53.405 "method": "bdev_enable_histogram", 00:26:53.405 "params": { 00:26:53.405 "name": "nvme0n1", 00:26:53.405 "enable": true 00:26:53.405 } 00:26:53.405 }, 00:26:53.405 { 00:26:53.405 "method": "bdev_wait_for_examine" 00:26:53.405 } 00:26:53.405 ] 00:26:53.405 }, 00:26:53.405 { 00:26:53.405 "subsystem": "nbd", 00:26:53.405 "config": [] 00:26:53.405 } 00:26:53.405 ] 00:26:53.405 }' 00:26:53.665 [2024-10-07 14:37:17.120167] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:26:53.665 [2024-10-07 14:37:17.120276] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3077415 ] 00:26:53.665 [2024-10-07 14:37:17.245189] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.926 [2024-10-07 14:37:17.382338] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.188 [2024-10-07 14:37:17.640739] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:54.188 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:54.188 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:26:54.188 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:54.188 14:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # 
jq -r '.[].name' 00:26:54.449 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.449 14:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:54.449 Running I/O for 1 seconds... 00:26:55.726 3851.00 IOPS, 15.04 MiB/s 00:26:55.726 Latency(us) 00:26:55.726 [2024-10-07T12:37:19.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.726 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:55.726 Verification LBA range: start 0x0 length 0x2000 00:26:55.726 nvme0n1 : 1.03 3874.50 15.13 0.00 0.00 32661.11 6744.75 33423.36 00:26:55.726 [2024-10-07T12:37:19.435Z] =================================================================================================================== 00:26:55.726 [2024-10-07T12:37:19.435Z] Total : 3874.50 15.13 0.00 0.00 32661.11 6744.75 33423.36 00:26:55.726 { 00:26:55.726 "results": [ 00:26:55.726 { 00:26:55.726 "job": "nvme0n1", 00:26:55.726 "core_mask": "0x2", 00:26:55.726 "workload": "verify", 00:26:55.726 "status": "finished", 00:26:55.726 "verify_range": { 00:26:55.726 "start": 0, 00:26:55.726 "length": 8192 00:26:55.726 }, 00:26:55.726 "queue_depth": 128, 00:26:55.726 "io_size": 4096, 00:26:55.726 "runtime": 1.026972, 00:26:55.726 "iops": 3874.4970651585436, 00:26:55.726 "mibps": 15.134754160775561, 00:26:55.726 "io_failed": 0, 00:26:55.726 "io_timeout": 0, 00:26:55.726 "avg_latency_us": 32661.110081259947, 00:26:55.726 "min_latency_us": 6744.746666666667, 00:26:55.726 "max_latency_us": 33423.36 00:26:55.726 } 00:26:55.726 ], 00:26:55.726 "core_count": 1 00:26:55.726 } 00:26:55.726 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:26:55.726 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:26:55.726 
14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:26:55.726 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:26:55.726 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:26:55.727 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:26:55.727 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:55.727 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:26:55.727 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:26:55.727 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:26:55.727 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:55.727 nvmf_trace.0 00:26:55.727 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:26:55.727 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3077415 00:26:55.727 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3077415 ']' 00:26:55.727 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3077415 00:26:55.727 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:55.727 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:55.727 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3077415 00:26:55.727 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:55.727 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:55.727 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3077415' 00:26:55.727 killing process with pid 3077415 00:26:55.727 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3077415 00:26:55.727 Received shutdown signal, test time was about 1.000000 seconds 00:26:55.727 00:26:55.727 Latency(us) 00:26:55.727 [2024-10-07T12:37:19.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.727 [2024-10-07T12:37:19.436Z] =================================================================================================================== 00:26:55.727 [2024-10-07T12:37:19.436Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:55.727 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3077415 00:26:56.336 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:26:56.336 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:56.336 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:26:56.336 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:56.336 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:26:56.336 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:56.336 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:56.336 rmmod nvme_tcp 00:26:56.336 rmmod nvme_fabrics 00:26:56.336 rmmod nvme_keyring 00:26:56.336 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:56.336 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@128 -- # set -e 00:26:56.336 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:26:56.336 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 3077177 ']' 00:26:56.336 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 3077177 00:26:56.336 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3077177 ']' 00:26:56.336 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3077177 00:26:56.336 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:26:56.336 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:56.336 14:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3077177 00:26:56.336 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:56.336 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:56.336 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3077177' 00:26:56.336 killing process with pid 3077177 00:26:56.336 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3077177 00:26:56.336 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3077177 00:26:57.279 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:57.279 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:57.279 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:57.279 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:26:57.279 14:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:26:57.279 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:57.279 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:26:57.279 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:57.279 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:57.279 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.280 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:57.280 14:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.deuSSIW9HJ /tmp/tmp.uCGnxUiiem /tmp/tmp.awhQqek6oU 00:26:59.829 00:26:59.829 real 1m39.131s 00:26:59.829 user 2m32.725s 00:26:59.829 sys 0m29.450s 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:59.829 ************************************ 00:26:59.829 END TEST nvmf_tls 00:26:59.829 ************************************ 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:59.829 ************************************ 00:26:59.829 START TEST nvmf_fips 00:26:59.829 ************************************ 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:26:59.829 * Looking for test storage... 00:26:59.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@341 -- # ver2_l=1 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:59.829 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:59.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.830 --rc genhtml_branch_coverage=1 00:26:59.830 --rc genhtml_function_coverage=1 00:26:59.830 --rc genhtml_legend=1 00:26:59.830 --rc geninfo_all_blocks=1 00:26:59.830 --rc geninfo_unexecuted_blocks=1 00:26:59.830 00:26:59.830 ' 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:59.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.830 --rc genhtml_branch_coverage=1 00:26:59.830 --rc genhtml_function_coverage=1 00:26:59.830 --rc genhtml_legend=1 00:26:59.830 --rc geninfo_all_blocks=1 00:26:59.830 --rc geninfo_unexecuted_blocks=1 00:26:59.830 00:26:59.830 ' 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:59.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.830 --rc genhtml_branch_coverage=1 00:26:59.830 --rc genhtml_function_coverage=1 00:26:59.830 --rc genhtml_legend=1 00:26:59.830 --rc geninfo_all_blocks=1 00:26:59.830 --rc geninfo_unexecuted_blocks=1 00:26:59.830 00:26:59.830 ' 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:59.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:59.830 --rc genhtml_branch_coverage=1 00:26:59.830 --rc genhtml_function_coverage=1 00:26:59.830 --rc genhtml_legend=1 00:26:59.830 --rc geninfo_all_blocks=1 00:26:59.830 --rc geninfo_unexecuted_blocks=1 00:26:59.830 00:26:59.830 ' 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:59.830 14:37:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:59.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:26:59.830 14:37:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:59.830 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:26:59.831 Error setting digest 00:26:59.831 40921CFCB77F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:26:59.831 40921CFCB77F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:59.831 14:37:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:26:59.831 14:37:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:27:07.978 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:07.978 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:27:07.978 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:07.978 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:07.978 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:07.978 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:07.978 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:07.978 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:27:07.978 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:07.978 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:27:07.978 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:27:07.978 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:27:07.978 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:27:07.978 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:27:07.978 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:27:07.978 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:07.978 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:07.978 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:07.978 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:07.979 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:07.979 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:07.979 Found net devices under 0000:31:00.0: cvl_0_0 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:07.979 Found net devices under 0000:31:00.1: cvl_0_1 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:07.979 14:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:07.979 14:37:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:07.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:07.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.508 ms 00:27:07.979 00:27:07.979 --- 10.0.0.2 ping statistics --- 00:27:07.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.979 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:07.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:07.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:27:07.979 00:27:07.979 --- 10.0.0.1 ping statistics --- 00:27:07.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.979 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:07.979 14:37:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=3082434 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 3082434 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3082434 ']' 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:07.979 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:27:07.979 [2024-10-07 14:37:31.243671] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:27:07.979 [2024-10-07 14:37:31.243809] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.979 [2024-10-07 14:37:31.400793] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.979 [2024-10-07 14:37:31.626117] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:07.979 [2024-10-07 14:37:31.626198] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:07.980 [2024-10-07 14:37:31.626211] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:07.980 [2024-10-07 14:37:31.626223] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:07.980 [2024-10-07 14:37:31.626233] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:07.980 [2024-10-07 14:37:31.627721] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.553 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:08.553 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:27:08.553 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:08.553 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:08.553 14:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:27:08.553 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:08.553 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:27:08.553 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:27:08.553 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:27:08.553 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.KS2 00:27:08.553 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:27:08.553 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.KS2 00:27:08.553 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.KS2 00:27:08.553 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.KS2 00:27:08.553 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:08.553 [2024-10-07 14:37:32.216732] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.553 [2024-10-07 14:37:32.232722] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:08.553 [2024-10-07 14:37:32.233089] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.815 malloc0 00:27:08.815 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:08.815 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3082664 00:27:08.815 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3082664 /var/tmp/bdevperf.sock 00:27:08.815 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:27:08.815 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3082664 ']' 00:27:08.815 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:08.815 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:08.815 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:08.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:08.815 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:08.815 14:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:27:08.815 [2024-10-07 14:37:32.457711] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:27:08.815 [2024-10-07 14:37:32.457844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3082664 ] 00:27:09.076 [2024-10-07 14:37:32.577170] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.076 [2024-10-07 14:37:32.718779] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:09.649 14:37:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:09.649 14:37:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:27:09.649 14:37:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.KS2 00:27:09.910 14:37:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:27:09.910 [2024-10-07 14:37:33.528386] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:09.910 TLSTESTn1 00:27:10.171 14:37:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:10.171 Running I/O for 10 seconds... 
00:27:12.058 4542.00 IOPS, 17.74 MiB/s [2024-10-07T12:37:37.154Z] 4936.00 IOPS, 19.28 MiB/s [2024-10-07T12:37:38.096Z] 4802.67 IOPS, 18.76 MiB/s [2024-10-07T12:37:39.036Z] 4765.75 IOPS, 18.62 MiB/s [2024-10-07T12:37:39.982Z] 4869.80 IOPS, 19.02 MiB/s [2024-10-07T12:37:40.923Z] 4848.67 IOPS, 18.94 MiB/s [2024-10-07T12:37:41.865Z] 4899.00 IOPS, 19.14 MiB/s [2024-10-07T12:37:42.804Z] 4832.12 IOPS, 18.88 MiB/s [2024-10-07T12:37:43.746Z] 4793.11 IOPS, 18.72 MiB/s [2024-10-07T12:37:44.007Z] 4835.20 IOPS, 18.89 MiB/s 00:27:20.298 Latency(us) 00:27:20.298 [2024-10-07T12:37:44.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:20.298 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:20.298 Verification LBA range: start 0x0 length 0x2000 00:27:20.298 TLSTESTn1 : 10.03 4835.16 18.89 0.00 0.00 26427.18 8082.77 65099.09 00:27:20.298 [2024-10-07T12:37:44.007Z] =================================================================================================================== 00:27:20.298 [2024-10-07T12:37:44.007Z] Total : 4835.16 18.89 0.00 0.00 26427.18 8082.77 65099.09 00:27:20.298 { 00:27:20.298 "results": [ 00:27:20.298 { 00:27:20.298 "job": "TLSTESTn1", 00:27:20.298 "core_mask": "0x4", 00:27:20.298 "workload": "verify", 00:27:20.299 "status": "finished", 00:27:20.299 "verify_range": { 00:27:20.299 "start": 0, 00:27:20.299 "length": 8192 00:27:20.299 }, 00:27:20.299 "queue_depth": 128, 00:27:20.299 "io_size": 4096, 00:27:20.299 "runtime": 10.026547, 00:27:20.299 "iops": 4835.164089890567, 00:27:20.299 "mibps": 18.887359726135028, 00:27:20.299 "io_failed": 0, 00:27:20.299 "io_timeout": 0, 00:27:20.299 "avg_latency_us": 26427.175709570958, 00:27:20.299 "min_latency_us": 8082.7733333333335, 00:27:20.299 "max_latency_us": 65099.09333333333 00:27:20.299 } 00:27:20.299 ], 00:27:20.299 "core_count": 1 00:27:20.299 } 00:27:20.299 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:27:20.299 
14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:27:20.299 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:27:20.299 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:27:20.299 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:27:20.299 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:27:20.299 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:27:20.299 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:27:20.299 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:27:20.299 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:27:20.299 nvmf_trace.0 00:27:20.299 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:27:20.299 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3082664 00:27:20.299 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3082664 ']' 00:27:20.299 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3082664 00:27:20.299 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:27:20.299 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:20.299 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3082664 00:27:20.299 14:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:27:20.299 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:27:20.299 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3082664' 00:27:20.299 killing process with pid 3082664 00:27:20.299 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3082664 00:27:20.299 Received shutdown signal, test time was about 10.000000 seconds 00:27:20.299 00:27:20.299 Latency(us) 00:27:20.299 [2024-10-07T12:37:44.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:20.299 [2024-10-07T12:37:44.008Z] =================================================================================================================== 00:27:20.299 [2024-10-07T12:37:44.008Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:20.299 14:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3082664 00:27:20.870 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:27:20.870 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:20.870 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:27:20.870 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:20.870 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:27:20.870 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:20.870 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:20.870 rmmod nvme_tcp 00:27:20.870 rmmod nvme_fabrics 00:27:20.870 rmmod nvme_keyring 00:27:20.870 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:27:21.130 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:27:21.130 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:27:21.130 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 3082434 ']' 00:27:21.130 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 3082434 00:27:21.130 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3082434 ']' 00:27:21.130 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3082434 00:27:21.130 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:27:21.130 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:21.130 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3082434 00:27:21.130 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:21.130 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:21.130 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3082434' 00:27:21.130 killing process with pid 3082434 00:27:21.130 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3082434 00:27:21.130 14:37:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3082434 00:27:21.703 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:21.703 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:21.703 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:21.703 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:27:21.703 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:27:21.703 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:27:21.703 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:21.703 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:21.703 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:21.703 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.703 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:21.703 14:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.248 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:24.248 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.KS2 00:27:24.248 00:27:24.248 real 0m24.311s 00:27:24.248 user 0m26.236s 00:27:24.248 sys 0m9.758s 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:27:24.249 ************************************ 00:27:24.249 END TEST nvmf_fips 00:27:24.249 ************************************ 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:24.249 ************************************ 00:27:24.249 START TEST nvmf_control_msg_list 00:27:24.249 ************************************ 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:27:24.249 * Looking for test storage... 00:27:24.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:27:24.249 14:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:24.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.249 --rc genhtml_branch_coverage=1 00:27:24.249 --rc genhtml_function_coverage=1 00:27:24.249 --rc genhtml_legend=1 00:27:24.249 --rc geninfo_all_blocks=1 00:27:24.249 --rc geninfo_unexecuted_blocks=1 00:27:24.249 00:27:24.249 ' 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:24.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.249 --rc genhtml_branch_coverage=1 00:27:24.249 --rc genhtml_function_coverage=1 00:27:24.249 --rc genhtml_legend=1 00:27:24.249 --rc geninfo_all_blocks=1 00:27:24.249 --rc geninfo_unexecuted_blocks=1 00:27:24.249 00:27:24.249 ' 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:24.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.249 --rc genhtml_branch_coverage=1 00:27:24.249 --rc genhtml_function_coverage=1 00:27:24.249 --rc genhtml_legend=1 00:27:24.249 --rc geninfo_all_blocks=1 00:27:24.249 --rc geninfo_unexecuted_blocks=1 00:27:24.249 00:27:24.249 ' 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # 
LCOV='lcov 00:27:24.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.249 --rc genhtml_branch_coverage=1 00:27:24.249 --rc genhtml_function_coverage=1 00:27:24.249 --rc genhtml_legend=1 00:27:24.249 --rc geninfo_all_blocks=1 00:27:24.249 --rc geninfo_unexecuted_blocks=1 00:27:24.249 00:27:24.249 ' 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.249 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.249 14:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:27:24.250 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.250 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:27:24.250 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:24.250 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:24.250 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:24.250 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:24.250 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:24.250 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:24.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:24.250 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:24.250 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:24.250 14:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:24.250 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:27:24.250 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:24.250 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:24.250 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:24.250 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:24.250 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:24.250 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.250 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:24.250 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.250 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:24.250 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:24.250 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:27:24.250 14:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:32.390 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:32.390 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:27:32.390 14:37:54 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:32.390 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:32.390 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:32.390 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:32.390 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:32.390 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:27:32.390 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:32.390 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:27:32.390 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:27:32.390 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:27:32.390 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:27:32.390 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:27:32.390 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:27:32.390 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:32.390 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:32.390 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:32.390 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:32.390 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:32.391 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:32.391 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:32.391 14:37:54 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:32.391 Found net devices under 0000:31:00.0: cvl_0_0 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:32.391 14:37:54 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:32.391 Found net devices under 0000:31:00.1: cvl_0_1 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:32.391 14:37:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:32.391 14:37:55 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:32.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:32.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:27:32.391 00:27:32.391 --- 10.0.0.2 ping statistics --- 00:27:32.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.391 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:32.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:32.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:27:32.391 00:27:32.391 --- 10.0.0.1 ping statistics --- 00:27:32.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.391 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=3089411 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 3089411 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 3089411 ']' 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:32.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:32.391 14:37:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:32.392 [2024-10-07 14:37:55.439258] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:27:32.392 [2024-10-07 14:37:55.439389] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:32.392 [2024-10-07 14:37:55.579524] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.392 [2024-10-07 14:37:55.764033] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:32.392 [2024-10-07 14:37:55.764083] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:32.392 [2024-10-07 14:37:55.764094] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:32.392 [2024-10-07 14:37:55.764106] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:32.392 [2024-10-07 14:37:55.764115] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:32.392 [2024-10-07 14:37:55.765346] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:32.652 [2024-10-07 14:37:56.235047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:32.652 Malloc0 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:32.652 [2024-10-07 14:37:56.316421] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3089755 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3089756 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3089757 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3089755 00:27:32.652 14:37:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:32.914 [2024-10-07 14:37:56.427315] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:27:32.914 [2024-10-07 14:37:56.437193] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:32.914 [2024-10-07 14:37:56.447163] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:34.298 Initializing NVMe Controllers 00:27:34.298 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:27:34.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:27:34.298 Initialization complete. Launching workers. 00:27:34.298 ======================================================== 00:27:34.298 Latency(us) 00:27:34.298 Device Information : IOPS MiB/s Average min max 00:27:34.298 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40919.29 40780.81 41327.12 00:27:34.298 ======================================================== 00:27:34.298 Total : 25.00 0.10 40919.29 40780.81 41327.12 00:27:34.298 00:27:34.298 Initializing NVMe Controllers 00:27:34.298 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:27:34.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:27:34.298 Initialization complete. Launching workers. 
00:27:34.298 ======================================================== 00:27:34.298 Latency(us) 00:27:34.298 Device Information : IOPS MiB/s Average min max 00:27:34.298 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40917.30 40779.50 41217.44 00:27:34.298 ======================================================== 00:27:34.298 Total : 25.00 0.10 40917.30 40779.50 41217.44 00:27:34.298 00:27:34.298 Initializing NVMe Controllers 00:27:34.298 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:27:34.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:27:34.298 Initialization complete. Launching workers. 00:27:34.298 ======================================================== 00:27:34.298 Latency(us) 00:27:34.298 Device Information : IOPS MiB/s Average min max 00:27:34.298 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40910.86 40825.00 41115.79 00:27:34.298 ======================================================== 00:27:34.298 Total : 25.00 0.10 40910.86 40825.00 41115.79 00:27:34.298 00:27:34.298 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3089756 00:27:34.298 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3089757 00:27:34.298 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:27:34.298 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:27:34.298 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:34.298 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:27:34.298 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:34.298 14:37:57 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:27:34.298 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:34.298 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:34.298 rmmod nvme_tcp 00:27:34.298 rmmod nvme_fabrics 00:27:34.298 rmmod nvme_keyring 00:27:34.298 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:34.298 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:27:34.298 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:27:34.298 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' -n 3089411 ']' 00:27:34.298 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 3089411 00:27:34.298 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 3089411 ']' 00:27:34.298 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 3089411 00:27:34.298 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:27:34.298 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:34.298 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3089411 00:27:34.298 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:34.298 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:34.298 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- 
# echo 'killing process with pid 3089411' 00:27:34.298 killing process with pid 3089411 00:27:34.298 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 3089411 00:27:34.298 14:37:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 3089411 00:27:35.240 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:35.240 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:35.240 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:35.240 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:27:35.240 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:27:35.240 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:35.240 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:27:35.240 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:35.240 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:35.240 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.240 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.240 14:37:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.782 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:37.782 00:27:37.782 real 0m13.417s 00:27:37.782 user 0m9.215s 
00:27:37.782 sys 0m6.724s 00:27:37.782 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:37.782 14:38:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:27:37.782 ************************************ 00:27:37.782 END TEST nvmf_control_msg_list 00:27:37.782 ************************************ 00:27:37.782 14:38:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:27:37.782 14:38:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:37.782 14:38:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:37.782 14:38:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:37.782 ************************************ 00:27:37.782 START TEST nvmf_wait_for_buf 00:27:37.782 ************************************ 00:27:37.782 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:27:37.782 * Looking for test storage... 
00:27:37.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:37.782 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:37.782 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:27:37.782 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:37.782 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:37.782 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:37.782 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:37.782 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:37.782 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:27:37.782 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:27:37.782 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:27:37.782 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:27:37.782 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:27:37.782 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:27:37.782 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:27:37.782 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:37.782 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:27:37.782 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:27:37.782 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:37.782 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # 
export 'LCOV_OPTS= 00:27:37.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.783 --rc genhtml_branch_coverage=1 00:27:37.783 --rc genhtml_function_coverage=1 00:27:37.783 --rc genhtml_legend=1 00:27:37.783 --rc geninfo_all_blocks=1 00:27:37.783 --rc geninfo_unexecuted_blocks=1 00:27:37.783 00:27:37.783 ' 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:37.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.783 --rc genhtml_branch_coverage=1 00:27:37.783 --rc genhtml_function_coverage=1 00:27:37.783 --rc genhtml_legend=1 00:27:37.783 --rc geninfo_all_blocks=1 00:27:37.783 --rc geninfo_unexecuted_blocks=1 00:27:37.783 00:27:37.783 ' 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:37.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.783 --rc genhtml_branch_coverage=1 00:27:37.783 --rc genhtml_function_coverage=1 00:27:37.783 --rc genhtml_legend=1 00:27:37.783 --rc geninfo_all_blocks=1 00:27:37.783 --rc geninfo_unexecuted_blocks=1 00:27:37.783 00:27:37.783 ' 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:37.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.783 --rc genhtml_branch_coverage=1 00:27:37.783 --rc genhtml_function_coverage=1 00:27:37.783 --rc genhtml_legend=1 00:27:37.783 --rc geninfo_all_blocks=1 00:27:37.783 --rc geninfo_unexecuted_blocks=1 00:27:37.783 00:27:37.783 ' 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:37.783 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:37.783 14:38:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:45.921 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:45.921 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:45.921 Found net devices under 0000:31:00.0: cvl_0_0 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:45.921 14:38:08 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:45.921 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:45.922 Found net devices under 0000:31:00.1: cvl_0_1 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:45.922 14:38:08 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:45.922 14:38:08 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:45.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:45.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:27:45.922 00:27:45.922 --- 10.0.0.2 ping statistics --- 00:27:45.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.922 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:45.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:45.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:27:45.922 00:27:45.922 --- 10.0.0.1 ping statistics --- 00:27:45.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.922 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=3094437 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # waitforlisten 3094437 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 3094437 ']' 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:45.922 14:38:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:45.922 [2024-10-07 14:38:08.630128] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:27:45.922 [2024-10-07 14:38:08.630264] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.922 [2024-10-07 14:38:08.769949] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.922 [2024-10-07 14:38:08.950070] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:45.922 [2024-10-07 14:38:08.950119] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:45.922 [2024-10-07 14:38:08.950131] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:45.922 [2024-10-07 14:38:08.950142] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:45.922 [2024-10-07 14:38:08.950151] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:45.922 [2024-10-07 14:38:08.951380] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.922 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:45.922 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:27:45.922 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:45.922 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:45.922 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:45.922 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:45.922 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:27:45.922 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:45.922 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:27:45.922 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.922 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:45.922 
14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.922 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:27:45.922 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.922 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:45.922 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.922 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:27:45.922 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.922 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:46.216 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.216 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:27:46.216 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.216 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:46.216 Malloc0 00:27:46.216 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.216 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:27:46.216 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.216 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:27:46.216 [2024-10-07 14:38:09.664624] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:46.216 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.216 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:27:46.216 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.216 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:46.216 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.216 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:27:46.216 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.216 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:46.216 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.216 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:46.216 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.216 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:46.216 [2024-10-07 14:38:09.688850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:46.216 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:27:46.216 14:38:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:46.216 [2024-10-07 14:38:09.804372] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:47.600 Initializing NVMe Controllers 00:27:47.600 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:27:47.600 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:27:47.600 Initialization complete. Launching workers. 00:27:47.600 ======================================================== 00:27:47.600 Latency(us) 00:27:47.600 Device Information : IOPS MiB/s Average min max 00:27:47.600 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 26.00 3.25 161119.55 47780.15 194544.20 00:27:47.600 ======================================================== 00:27:47.600 Total : 26.00 3.25 161119.55 47780.15 194544.20 00:27:47.600 00:27:47.600 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:27:47.600 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:27:47.600 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.600 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:47.600 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.861 14:38:11 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=390 00:27:47.861 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 390 -eq 0 ]] 00:27:47.861 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:27:47.861 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:27:47.861 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:47.861 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:27:47.861 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:47.861 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:27:47.861 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:47.861 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:47.861 rmmod nvme_tcp 00:27:47.861 rmmod nvme_fabrics 00:27:47.861 rmmod nvme_keyring 00:27:47.861 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:47.861 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:27:47.861 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:27:47.861 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 3094437 ']' 00:27:47.861 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 3094437 00:27:47.861 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 3094437 ']' 00:27:47.861 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 3094437 
00:27:47.861 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:27:47.861 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:47.861 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3094437 00:27:47.861 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:47.861 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:47.861 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3094437' 00:27:47.861 killing process with pid 3094437 00:27:47.861 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 3094437 00:27:47.861 14:38:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 3094437 00:27:48.804 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:48.804 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:48.804 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:48.804 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:27:48.804 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:48.804 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:27:48.804 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:27:48.804 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:48.804 14:38:12 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:48.804 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.804 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:48.804 14:38:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.716 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:50.716 00:27:50.716 real 0m13.322s 00:27:50.716 user 0m5.706s 00:27:50.716 sys 0m6.120s 00:27:50.716 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:50.716 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:27:50.716 ************************************ 00:27:50.716 END TEST nvmf_wait_for_buf 00:27:50.716 ************************************ 00:27:50.716 14:38:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:27:50.716 14:38:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:27:50.716 14:38:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:50.716 14:38:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:50.716 14:38:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:50.716 ************************************ 00:27:50.716 START TEST nvmf_fuzz 00:27:50.716 ************************************ 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh 
--transport=tcp 00:27:50.978 * Looking for test storage... 00:27:50.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:27:50.978 14:38:14 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:50.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.978 --rc genhtml_branch_coverage=1 00:27:50.978 --rc genhtml_function_coverage=1 
00:27:50.978 --rc genhtml_legend=1 00:27:50.978 --rc geninfo_all_blocks=1 00:27:50.978 --rc geninfo_unexecuted_blocks=1 00:27:50.978 00:27:50.978 ' 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:50.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.978 --rc genhtml_branch_coverage=1 00:27:50.978 --rc genhtml_function_coverage=1 00:27:50.978 --rc genhtml_legend=1 00:27:50.978 --rc geninfo_all_blocks=1 00:27:50.978 --rc geninfo_unexecuted_blocks=1 00:27:50.978 00:27:50.978 ' 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:50.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.978 --rc genhtml_branch_coverage=1 00:27:50.978 --rc genhtml_function_coverage=1 00:27:50.978 --rc genhtml_legend=1 00:27:50.978 --rc geninfo_all_blocks=1 00:27:50.978 --rc geninfo_unexecuted_blocks=1 00:27:50.978 00:27:50.978 ' 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:50.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:50.978 --rc genhtml_branch_coverage=1 00:27:50.978 --rc genhtml_function_coverage=1 00:27:50.978 --rc genhtml_legend=1 00:27:50.978 --rc geninfo_all_blocks=1 00:27:50.978 --rc geninfo_unexecuted_blocks=1 00:27:50.978 00:27:50.978 ' 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.978 
14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.978 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:50.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:27:50.979 14:38:14 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:59.121 14:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:59.121 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:31:00.0 (0x8086 - 0x159b)' 00:27:59.122 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:59.122 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:59.122 Found net devices under 0000:31:00.0: cvl_0_0 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:59.122 Found net devices under 0000:31:00.1: cvl_0_1 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # is_hw=yes 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:59.122 14:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:59.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:59.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:27:59.122 00:27:59.122 --- 10.0.0.2 ping statistics --- 00:27:59.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.122 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:59.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:59.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:27:59.122 00:27:59.122 --- 10.0.0.1 ping statistics --- 00:27:59.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.122 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # return 0 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3099250 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3099250 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' 
-z 3099250 ']' 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:59.122 14:38:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:59.382 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:59.382 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:27:59.382 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:59.382 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.382 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:59.382 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.382 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:27:59.382 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.382 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:59.382 Malloc0 00:27:59.382 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.382 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:59.382 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.382 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:59.382 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.382 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:59.382 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.382 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:59.382 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.382 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:59.382 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.382 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:59.382 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.383 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:27:59.383 14:38:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:28:31.487 Fuzzing completed. 
Shutting down the fuzz application 00:28:31.487 00:28:31.487 Dumping successful admin opcodes: 00:28:31.487 8, 9, 10, 24, 00:28:31.487 Dumping successful io opcodes: 00:28:31.487 0, 9, 00:28:31.487 NS: 0x200003aefec0 I/O qp, Total commands completed: 811953, total successful commands: 4718, random_seed: 2783507584 00:28:31.487 NS: 0x200003aefec0 admin qp, Total commands completed: 101850, total successful commands: 839, random_seed: 3728196928 00:28:31.487 14:38:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:28:32.059 Fuzzing completed. Shutting down the fuzz application 00:28:32.059 00:28:32.059 Dumping successful admin opcodes: 00:28:32.059 24, 00:28:32.059 Dumping successful io opcodes: 00:28:32.059 00:28:32.059 NS: 0x200003aefec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 79324333 00:28:32.059 NS: 0x200003aefec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 79425247 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:28:32.059 14:38:55 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:32.059 rmmod nvme_tcp 00:28:32.059 rmmod nvme_fabrics 00:28:32.059 rmmod nvme_keyring 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@515 -- # '[' -n 3099250 ']' 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # killprocess 3099250 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 3099250 ']' 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 3099250 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3099250 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3099250' 00:28:32.059 killing process with pid 3099250 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 3099250 00:28:32.059 14:38:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 3099250 00:28:33.000 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:33.000 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:33.000 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:33.000 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:28:33.000 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # iptables-save 00:28:33.000 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:33.000 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # iptables-restore 00:28:33.000 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:33.000 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:33.000 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.000 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:33.000 14:38:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.566 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:35.566 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:28:35.566 00:28:35.566 real 0m44.358s 00:28:35.566 user 0m59.283s 00:28:35.566 sys 0m15.425s 00:28:35.566 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:35.566 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:28:35.566 ************************************ 00:28:35.566 END TEST nvmf_fuzz 00:28:35.566 ************************************ 00:28:35.566 14:38:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:28:35.566 14:38:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:35.566 14:38:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:35.566 14:38:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:35.566 ************************************ 00:28:35.566 START TEST nvmf_multiconnection 00:28:35.566 ************************************ 00:28:35.566 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:28:35.566 * Looking for test storage... 
00:28:35.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:35.566 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:35.566 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:28:35.566 14:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:35.566 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:35.566 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:35.566 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:35.566 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:35.566 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:28:35.566 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:28:35.566 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:28:35.566 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:28:35.567 14:38:59 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:35.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.567 --rc genhtml_branch_coverage=1 00:28:35.567 --rc genhtml_function_coverage=1 00:28:35.567 --rc genhtml_legend=1 00:28:35.567 --rc geninfo_all_blocks=1 00:28:35.567 --rc geninfo_unexecuted_blocks=1 00:28:35.567 00:28:35.567 ' 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:35.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.567 --rc genhtml_branch_coverage=1 00:28:35.567 --rc genhtml_function_coverage=1 00:28:35.567 --rc genhtml_legend=1 00:28:35.567 --rc geninfo_all_blocks=1 00:28:35.567 --rc geninfo_unexecuted_blocks=1 00:28:35.567 00:28:35.567 ' 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:35.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.567 --rc genhtml_branch_coverage=1 00:28:35.567 --rc genhtml_function_coverage=1 00:28:35.567 --rc genhtml_legend=1 00:28:35.567 --rc geninfo_all_blocks=1 00:28:35.567 --rc geninfo_unexecuted_blocks=1 00:28:35.567 00:28:35.567 ' 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:35.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.567 --rc genhtml_branch_coverage=1 00:28:35.567 --rc genhtml_function_coverage=1 00:28:35.567 --rc genhtml_legend=1 00:28:35.567 --rc geninfo_all_blocks=1 00:28:35.567 --rc geninfo_unexecuted_blocks=1 00:28:35.567 00:28:35.567 ' 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:35.567 14:38:59 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:35.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:28:35.567 14:38:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:42.159 14:39:05 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:42.159 14:39:05 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:42.159 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:42.159 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:42.159 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:42.160 Found net devices under 0000:31:00.0: cvl_0_0 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ up == 
up ]] 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:42.160 Found net devices under 0000:31:00.1: cvl_0_1 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # is_hw=yes 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:42.160 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:42.422 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:42.422 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:42.422 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:42.422 14:39:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:42.422 14:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:42.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:42.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:28:42.422 00:28:42.422 --- 10.0.0.2 ping statistics --- 00:28:42.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.422 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:42.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:42.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:28:42.422 00:28:42.422 --- 10.0.0.1 ping statistics --- 00:28:42.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.422 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # return 0 00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # nvmfpid=3110199 00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # waitforlisten 3110199 00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 3110199 ']' 00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:42.422 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:42.683 [2024-10-07 14:39:06.218465] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:28:42.683 [2024-10-07 14:39:06.218587] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:42.683 [2024-10-07 14:39:06.365468] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:42.944 [2024-10-07 14:39:06.555230] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:42.944 [2024-10-07 14:39:06.555279] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:42.944 [2024-10-07 14:39:06.555294] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:42.944 [2024-10-07 14:39:06.555306] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:42.944 [2024-10-07 14:39:06.555315] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:42.944 [2024-10-07 14:39:06.557584] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.944 [2024-10-07 14:39:06.557669] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:42.944 [2024-10-07 14:39:06.557786] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.945 [2024-10-07 14:39:06.557808] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:28:43.517 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:43.517 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:28:43.517 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:43.517 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:43.517 14:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:43.517 [2024-10-07 14:39:07.029502] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:28:43.517 14:39:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:43.517 Malloc1 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:43.517 [2024-10-07 14:39:07.135676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:43.517 Malloc2 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.517 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:43.778 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.778 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:43.778 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:28:43.778 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.778 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:43.778 Malloc3 00:28:43.778 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:43.779 Malloc4 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.779 
14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:43.779 Malloc5 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.779 14:39:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.779 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.041 Malloc6 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.041 Malloc7 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.041 14:39:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.041 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.303 Malloc8 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.303 14:39:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.303 Malloc9 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.303 Malloc10 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:28:44.303 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.304 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.304 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.304 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:28:44.304 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.304 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.304 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.304 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:28:44.304 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.304 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.304 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.304 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:44.304 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:28:44.304 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.304 14:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.564 Malloc11 00:28:44.564 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.564 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:28:44.564 
14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.564 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.564 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.564 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:28:44.564 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.564 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.564 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.564 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:28:44.564 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.564 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:44.564 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.564 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:28:44.564 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:44.564 14:39:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:28:46.477 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:28:46.478 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:46.478 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:46.478 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:46.478 14:39:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:48.410 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:48.410 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:48.410 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:28:48.410 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:48.410 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:48.410 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:48.410 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:48.410 14:39:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:28:49.801 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:28:49.801 14:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:49.801 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:49.801 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:49.801 14:39:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:51.715 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:51.715 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:51.715 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:28:51.715 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:51.715 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:51.715 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:51.715 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:51.715 14:39:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:28:53.629 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:28:53.629 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:53.629 14:39:16 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:53.629 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:53.629 14:39:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:55.630 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:55.630 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:55.630 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:28:55.630 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:55.630 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:55.630 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:55.630 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:55.630 14:39:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:28:57.122 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:28:57.122 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:28:57.122 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:28:57.122 
14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:28:57.122 14:39:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:28:59.037 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:28:59.037 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:28:59.037 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:28:59.037 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:28:59.037 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:28:59.037 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:28:59.037 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:59.037 14:39:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:29:00.951 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:29:00.951 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:29:00.951 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:29:00.951 14:39:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:29:00.951 14:39:24 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:29:02.867 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:29:02.867 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:29:02.868 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:29:02.868 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:29:02.868 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:29:02.868 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:29:02.868 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:02.868 14:39:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:29:04.782 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:29:04.782 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:29:04.782 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:29:04.782 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:29:04.782 14:39:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:29:06.695 14:39:29 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:29:06.695 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:29:06.695 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:29:06.695 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:29:06.695 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:29:06.695 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:29:06.695 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:06.695 14:39:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:29:08.607 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:29:08.607 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:29:08.607 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:29:08.607 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:29:08.607 14:39:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:29:10.520 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:29:10.520 14:39:33 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:29:10.520 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:29:10.520 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:29:10.520 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:29:10.520 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:29:10.520 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:10.520 14:39:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:29:12.432 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:29:12.432 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:29:12.432 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:29:12.432 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:29:12.432 14:39:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:29:14.343 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:29:14.343 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:29:14.343 14:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:29:14.343 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:29:14.343 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:29:14.343 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:29:14.343 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:14.343 14:39:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:29:16.253 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:29:16.253 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:29:16.253 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:29:16.253 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:29:16.253 14:39:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:29:18.166 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:29:18.166 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:29:18.166 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:29:18.166 14:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:29:18.166 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:29:18.166 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:29:18.166 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:18.166 14:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:29:20.078 14:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:29:20.078 14:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:29:20.078 14:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:29:20.078 14:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:29:20.078 14:39:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:29:21.992 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:29:21.992 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:29:21.992 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:29:21.992 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:29:21.992 14:39:45 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:29:21.992 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:29:21.992 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:21.992 14:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:29:23.907 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:29:23.907 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:29:23.907 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:29:23.907 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:29:23.907 14:39:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:29:25.820 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:29:25.820 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:29:25.820 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:29:25.820 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:29:25.820 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:29:25.820 
14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:29:25.820 14:39:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:29:25.820 [global] 00:29:25.820 thread=1 00:29:25.820 invalidate=1 00:29:25.820 rw=read 00:29:25.821 time_based=1 00:29:25.821 runtime=10 00:29:25.821 ioengine=libaio 00:29:25.821 direct=1 00:29:25.821 bs=262144 00:29:25.821 iodepth=64 00:29:25.821 norandommap=1 00:29:25.821 numjobs=1 00:29:25.821 00:29:25.821 [job0] 00:29:25.821 filename=/dev/nvme0n1 00:29:25.821 [job1] 00:29:25.821 filename=/dev/nvme10n1 00:29:25.821 [job2] 00:29:25.821 filename=/dev/nvme1n1 00:29:25.821 [job3] 00:29:25.821 filename=/dev/nvme2n1 00:29:25.821 [job4] 00:29:25.821 filename=/dev/nvme3n1 00:29:25.821 [job5] 00:29:25.821 filename=/dev/nvme4n1 00:29:25.821 [job6] 00:29:25.821 filename=/dev/nvme5n1 00:29:25.821 [job7] 00:29:25.821 filename=/dev/nvme6n1 00:29:25.821 [job8] 00:29:25.821 filename=/dev/nvme7n1 00:29:25.821 [job9] 00:29:25.821 filename=/dev/nvme8n1 00:29:25.821 [job10] 00:29:25.821 filename=/dev/nvme9n1 00:29:26.082 Could not set queue depth (nvme0n1) 00:29:26.083 Could not set queue depth (nvme10n1) 00:29:26.083 Could not set queue depth (nvme1n1) 00:29:26.083 Could not set queue depth (nvme2n1) 00:29:26.083 Could not set queue depth (nvme3n1) 00:29:26.083 Could not set queue depth (nvme4n1) 00:29:26.083 Could not set queue depth (nvme5n1) 00:29:26.083 Could not set queue depth (nvme6n1) 00:29:26.083 Could not set queue depth (nvme7n1) 00:29:26.083 Could not set queue depth (nvme8n1) 00:29:26.083 Could not set queue depth (nvme9n1) 00:29:26.344 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:26.344 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:29:26.344 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:26.344 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:26.344 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:26.344 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:26.344 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:26.344 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:26.344 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:26.344 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:26.344 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:26.344 fio-3.35 00:29:26.344 Starting 11 threads 00:29:38.599 00:29:38.599 job0: (groupid=0, jobs=1): err= 0: pid=3119375: Mon Oct 7 14:40:00 2024 00:29:38.599 read: IOPS=112, BW=28.2MiB/s (29.6MB/s)(287MiB/10181msec) 00:29:38.599 slat (usec): min=15, max=299525, avg=8711.54, stdev=28422.08 00:29:38.599 clat (msec): min=17, max=1232, avg=557.66, stdev=298.20 00:29:38.599 lat (msec): min=17, max=1232, avg=566.37, stdev=302.99 00:29:38.599 clat percentiles (msec): 00:29:38.599 | 1.00th=[ 25], 5.00th=[ 113], 10.00th=[ 157], 20.00th=[ 182], 00:29:38.599 | 30.00th=[ 317], 40.00th=[ 527], 50.00th=[ 617], 60.00th=[ 726], 00:29:38.599 | 70.00th=[ 785], 80.00th=[ 835], 90.00th=[ 919], 95.00th=[ 936], 00:29:38.599 | 99.00th=[ 1011], 99.50th=[ 1099], 99.90th=[ 1133], 99.95th=[ 1234], 00:29:38.599 | 99.99th=[ 1234] 00:29:38.599 bw ( KiB/s): min= 6144, 
max=100352, per=2.77%, avg=27776.00, stdev=21590.36, samples=20 00:29:38.599 iops : min= 24, max= 392, avg=108.50, stdev=84.34, samples=20 00:29:38.599 lat (msec) : 20=0.44%, 50=0.87%, 100=3.39%, 250=22.98%, 500=8.96% 00:29:38.599 lat (msec) : 750=26.98%, 1000=35.16%, 2000=1.22% 00:29:38.599 cpu : usr=0.01%, sys=0.61%, ctx=195, majf=0, minf=4097 00:29:38.599 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.5% 00:29:38.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:38.599 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:38.599 issued rwts: total=1149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:38.599 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:38.599 job1: (groupid=0, jobs=1): err= 0: pid=3119401: Mon Oct 7 14:40:00 2024 00:29:38.599 read: IOPS=1306, BW=327MiB/s (342MB/s)(3276MiB/10033msec) 00:29:38.599 slat (usec): min=10, max=44716, avg=759.95, stdev=2334.12 00:29:38.599 clat (msec): min=17, max=165, avg=48.17, stdev=22.58 00:29:38.599 lat (msec): min=19, max=168, avg=48.93, stdev=22.93 00:29:38.599 clat percentiles (msec): 00:29:38.599 | 1.00th=[ 31], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 37], 00:29:38.599 | 30.00th=[ 39], 40.00th=[ 40], 50.00th=[ 41], 60.00th=[ 42], 00:29:38.599 | 70.00th=[ 44], 80.00th=[ 48], 90.00th=[ 77], 95.00th=[ 110], 00:29:38.599 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 159], 99.95th=[ 161], 00:29:38.599 | 99.99th=[ 165] 00:29:38.599 bw ( KiB/s): min=124928, max=430592, per=33.33%, avg=333849.60, stdev=107894.11, samples=20 00:29:38.599 iops : min= 488, max= 1682, avg=1304.10, stdev=421.46, samples=20 00:29:38.599 lat (msec) : 20=0.02%, 50=83.23%, 100=10.00%, 250=6.75% 00:29:38.599 cpu : usr=0.46%, sys=4.37%, ctx=1610, majf=0, minf=4097 00:29:38.599 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:29:38.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:38.599 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:38.599 issued rwts: total=13104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:38.599 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:38.599 job2: (groupid=0, jobs=1): err= 0: pid=3119432: Mon Oct 7 14:40:00 2024 00:29:38.599 read: IOPS=221, BW=55.4MiB/s (58.1MB/s)(564MiB/10190msec) 00:29:38.599 slat (usec): min=12, max=538857, avg=4377.09, stdev=20595.63 00:29:38.599 clat (msec): min=14, max=1317, avg=284.07, stdev=203.34 00:29:38.599 lat (msec): min=15, max=1368, avg=288.45, stdev=206.09 00:29:38.599 clat percentiles (msec): 00:29:38.599 | 1.00th=[ 61], 5.00th=[ 94], 10.00th=[ 115], 20.00th=[ 144], 00:29:38.599 | 30.00th=[ 163], 40.00th=[ 218], 50.00th=[ 245], 60.00th=[ 268], 00:29:38.599 | 70.00th=[ 296], 80.00th=[ 363], 90.00th=[ 451], 95.00th=[ 827], 00:29:38.599 | 99.00th=[ 1062], 99.50th=[ 1083], 99.90th=[ 1150], 99.95th=[ 1318], 00:29:38.599 | 99.99th=[ 1318] 00:29:38.599 bw ( KiB/s): min=13824, max=150528, per=5.61%, avg=56140.80, stdev=34818.88, samples=20 00:29:38.599 iops : min= 54, max= 588, avg=219.30, stdev=136.01, samples=20 00:29:38.599 lat (msec) : 20=0.35%, 50=0.35%, 100=6.51%, 250=44.97%, 500=39.34% 00:29:38.599 lat (msec) : 750=1.91%, 1000=4.56%, 2000=1.99% 00:29:38.599 cpu : usr=0.05%, sys=0.92%, ctx=399, majf=0, minf=4097 00:29:38.599 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:29:38.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:38.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:38.599 issued rwts: total=2257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:38.599 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:38.599 job3: (groupid=0, jobs=1): err= 0: pid=3119444: Mon Oct 7 14:40:00 2024 00:29:38.599 read: IOPS=489, BW=122MiB/s (128MB/s)(1229MiB/10047msec) 00:29:38.599 slat (usec): min=10, max=106987, avg=1816.61, stdev=5476.35 00:29:38.599 clat 
(msec): min=17, max=1014, avg=128.80, stdev=78.11 00:29:38.599 lat (msec): min=21, max=1014, avg=130.62, stdev=78.46 00:29:38.599 clat percentiles (msec): 00:29:38.599 | 1.00th=[ 41], 5.00th=[ 51], 10.00th=[ 72], 20.00th=[ 95], 00:29:38.599 | 30.00th=[ 106], 40.00th=[ 116], 50.00th=[ 125], 60.00th=[ 133], 00:29:38.600 | 70.00th=[ 138], 80.00th=[ 144], 90.00th=[ 157], 95.00th=[ 197], 00:29:38.600 | 99.00th=[ 326], 99.50th=[ 835], 99.90th=[ 995], 99.95th=[ 1011], 00:29:38.600 | 99.99th=[ 1011] 00:29:38.600 bw ( KiB/s): min=51302, max=176640, per=12.40%, avg=124216.30, stdev=30889.31, samples=20 00:29:38.600 iops : min= 200, max= 690, avg=485.20, stdev=120.71, samples=20 00:29:38.600 lat (msec) : 20=0.02%, 50=4.78%, 100=20.55%, 250=70.60%, 500=3.38% 00:29:38.600 lat (msec) : 1000=0.59%, 2000=0.08% 00:29:38.600 cpu : usr=0.20%, sys=1.86%, ctx=911, majf=0, minf=4097 00:29:38.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:29:38.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:38.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:38.600 issued rwts: total=4915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:38.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:38.600 job4: (groupid=0, jobs=1): err= 0: pid=3119451: Mon Oct 7 14:40:00 2024 00:29:38.600 read: IOPS=300, BW=75.1MiB/s (78.8MB/s)(755MiB/10042msec) 00:29:38.600 slat (usec): min=11, max=465668, avg=2709.13, stdev=18657.76 00:29:38.600 clat (msec): min=11, max=1199, avg=209.99, stdev=242.36 00:29:38.600 lat (msec): min=11, max=1328, avg=212.70, stdev=245.50 00:29:38.600 clat percentiles (msec): 00:29:38.600 | 1.00th=[ 17], 5.00th=[ 24], 10.00th=[ 32], 20.00th=[ 48], 00:29:38.600 | 30.00th=[ 53], 40.00th=[ 56], 50.00th=[ 64], 60.00th=[ 222], 00:29:38.600 | 70.00th=[ 257], 80.00th=[ 317], 90.00th=[ 460], 95.00th=[ 818], 00:29:38.600 | 99.00th=[ 1011], 99.50th=[ 1167], 99.90th=[ 1167], 99.95th=[ 1200], 
00:29:38.600 | 99.99th=[ 1200] 00:29:38.600 bw ( KiB/s): min= 8192, max=343552, per=7.55%, avg=75648.00, stdev=89843.33, samples=20 00:29:38.600 iops : min= 32, max= 1342, avg=295.50, stdev=350.95, samples=20 00:29:38.600 lat (msec) : 20=3.25%, 50=21.40%, 100=27.80%, 250=16.30%, 500=21.84% 00:29:38.600 lat (msec) : 750=2.15%, 1000=5.96%, 2000=1.29% 00:29:38.600 cpu : usr=0.07%, sys=1.18%, ctx=481, majf=0, minf=4097 00:29:38.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:29:38.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:38.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:38.600 issued rwts: total=3018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:38.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:38.600 job5: (groupid=0, jobs=1): err= 0: pid=3119475: Mon Oct 7 14:40:00 2024 00:29:38.600 read: IOPS=282, BW=70.7MiB/s (74.1MB/s)(720MiB/10185msec) 00:29:38.600 slat (usec): min=9, max=635550, avg=2765.27, stdev=20577.42 00:29:38.600 clat (msec): min=8, max=1537, avg=223.39, stdev=253.44 00:29:38.600 lat (msec): min=8, max=1537, avg=226.15, stdev=257.04 00:29:38.600 clat percentiles (msec): 00:29:38.600 | 1.00th=[ 13], 5.00th=[ 24], 10.00th=[ 33], 20.00th=[ 63], 00:29:38.600 | 30.00th=[ 75], 40.00th=[ 99], 50.00th=[ 126], 60.00th=[ 174], 00:29:38.600 | 70.00th=[ 215], 80.00th=[ 330], 90.00th=[ 676], 95.00th=[ 885], 00:29:38.600 | 99.00th=[ 1070], 99.50th=[ 1083], 99.90th=[ 1234], 99.95th=[ 1250], 00:29:38.600 | 99.99th=[ 1536] 00:29:38.600 bw ( KiB/s): min= 6656, max=250880, per=7.57%, avg=75856.84, stdev=71151.68, samples=19 00:29:38.600 iops : min= 26, max= 980, avg=296.32, stdev=277.94, samples=19 00:29:38.600 lat (msec) : 10=0.14%, 20=3.72%, 50=11.84%, 100=25.36%, 250=32.96% 00:29:38.600 lat (msec) : 500=15.04%, 750=1.98%, 1000=7.19%, 2000=1.77% 00:29:38.600 cpu : usr=0.10%, sys=0.96%, ctx=518, majf=0, minf=4097 00:29:38.600 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:29:38.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:38.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:38.600 issued rwts: total=2879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:38.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:38.600 job6: (groupid=0, jobs=1): err= 0: pid=3119485: Mon Oct 7 14:40:00 2024 00:29:38.600 read: IOPS=519, BW=130MiB/s (136MB/s)(1307MiB/10051msec) 00:29:38.600 slat (usec): min=12, max=127136, avg=1911.49, stdev=5732.45 00:29:38.600 clat (msec): min=14, max=387, avg=120.97, stdev=41.51 00:29:38.600 lat (msec): min=15, max=387, avg=122.88, stdev=42.07 00:29:38.600 clat percentiles (msec): 00:29:38.600 | 1.00th=[ 71], 5.00th=[ 82], 10.00th=[ 87], 20.00th=[ 94], 00:29:38.600 | 30.00th=[ 103], 40.00th=[ 110], 50.00th=[ 117], 60.00th=[ 123], 00:29:38.600 | 70.00th=[ 128], 80.00th=[ 136], 90.00th=[ 146], 95.00th=[ 165], 00:29:38.600 | 99.00th=[ 313], 99.50th=[ 334], 99.90th=[ 338], 99.95th=[ 355], 00:29:38.600 | 99.99th=[ 388] 00:29:38.600 bw ( KiB/s): min=50688, max=182784, per=13.20%, avg=132178.35, stdev=33696.31, samples=20 00:29:38.600 iops : min= 198, max= 714, avg=516.30, stdev=131.68, samples=20 00:29:38.600 lat (msec) : 20=0.19%, 50=0.17%, 100=28.01%, 250=67.95%, 500=3.67% 00:29:38.600 cpu : usr=0.22%, sys=1.92%, ctx=921, majf=0, minf=4097 00:29:38.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:29:38.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:38.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:38.600 issued rwts: total=5226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:38.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:38.600 job7: (groupid=0, jobs=1): err= 0: pid=3119486: Mon Oct 7 14:40:00 2024 00:29:38.600 read: IOPS=168, BW=42.2MiB/s (44.3MB/s)(430MiB/10191msec) 
00:29:38.600 slat (usec): min=8, max=706108, avg=5340.07, stdev=28146.81 00:29:38.600 clat (msec): min=9, max=1535, avg=373.04, stdev=330.26 00:29:38.600 lat (msec): min=9, max=1535, avg=378.38, stdev=335.15 00:29:38.600 clat percentiles (msec): 00:29:38.600 | 1.00th=[ 20], 5.00th=[ 47], 10.00th=[ 68], 20.00th=[ 130], 00:29:38.600 | 30.00th=[ 144], 40.00th=[ 157], 50.00th=[ 171], 60.00th=[ 264], 00:29:38.600 | 70.00th=[ 558], 80.00th=[ 785], 90.00th=[ 869], 95.00th=[ 978], 00:29:38.600 | 99.00th=[ 1150], 99.50th=[ 1167], 99.90th=[ 1250], 99.95th=[ 1536], 00:29:38.600 | 99.99th=[ 1536] 00:29:38.600 bw ( KiB/s): min= 2048, max=114688, per=4.24%, avg=42419.20, stdev=37275.74, samples=20 00:29:38.600 iops : min= 8, max= 448, avg=165.70, stdev=145.61, samples=20 00:29:38.600 lat (msec) : 10=0.12%, 20=0.93%, 50=5.40%, 100=7.61%, 250=45.32% 00:29:38.600 lat (msec) : 500=6.33%, 750=12.43%, 1000=18.30%, 2000=3.54% 00:29:38.600 cpu : usr=0.06%, sys=0.56%, ctx=329, majf=0, minf=3534 00:29:38.600 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:29:38.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:38.600 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:38.600 issued rwts: total=1721,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:38.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:38.600 job8: (groupid=0, jobs=1): err= 0: pid=3119514: Mon Oct 7 14:40:00 2024 00:29:38.600 read: IOPS=110, BW=27.5MiB/s (28.9MB/s)(281MiB/10183msec) 00:29:38.600 slat (usec): min=13, max=303978, avg=6900.79, stdev=26439.77 00:29:38.600 clat (msec): min=16, max=1129, avg=573.02, stdev=294.48 00:29:38.600 lat (msec): min=20, max=1164, avg=579.92, stdev=298.37 00:29:38.600 clat percentiles (msec): 00:29:38.600 | 1.00th=[ 36], 5.00th=[ 140], 10.00th=[ 161], 20.00th=[ 184], 00:29:38.600 | 30.00th=[ 313], 40.00th=[ 558], 50.00th=[ 676], 60.00th=[ 743], 00:29:38.600 | 70.00th=[ 776], 80.00th=[ 827], 
90.00th=[ 927], 95.00th=[ 961], 00:29:38.600 | 99.00th=[ 1028], 99.50th=[ 1133], 99.90th=[ 1133], 99.95th=[ 1133], 00:29:38.600 | 99.99th=[ 1133] 00:29:38.600 bw ( KiB/s): min=13312, max=92160, per=2.70%, avg=27084.80, stdev=19185.28, samples=20 00:29:38.600 iops : min= 52, max= 360, avg=105.80, stdev=74.94, samples=20 00:29:38.600 lat (msec) : 20=0.09%, 50=1.96%, 100=0.53%, 250=23.53%, 500=9.54% 00:29:38.600 lat (msec) : 750=26.92%, 1000=36.36%, 2000=1.07% 00:29:38.600 cpu : usr=0.01%, sys=0.51%, ctx=209, majf=0, minf=4097 00:29:38.600 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.4% 00:29:38.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:38.600 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:38.600 issued rwts: total=1122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:38.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:38.600 job9: (groupid=0, jobs=1): err= 0: pid=3119527: Mon Oct 7 14:40:00 2024 00:29:38.600 read: IOPS=271, BW=67.9MiB/s (71.2MB/s)(691MiB/10170msec) 00:29:38.600 slat (usec): min=5, max=771671, avg=3311.10, stdev=23616.91 00:29:38.600 clat (msec): min=21, max=1488, avg=231.92, stdev=268.41 00:29:38.600 lat (msec): min=23, max=1488, avg=235.23, stdev=271.80 00:29:38.600 clat percentiles (msec): 00:29:38.600 | 1.00th=[ 27], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 38], 00:29:38.600 | 30.00th=[ 58], 40.00th=[ 81], 50.00th=[ 176], 60.00th=[ 203], 00:29:38.600 | 70.00th=[ 232], 80.00th=[ 347], 90.00th=[ 701], 95.00th=[ 927], 00:29:38.600 | 99.00th=[ 1217], 99.50th=[ 1234], 99.90th=[ 1234], 99.95th=[ 1485], 00:29:38.600 | 99.99th=[ 1485] 00:29:38.600 bw ( KiB/s): min=12288, max=327168, per=7.27%, avg=72784.84, stdev=88107.53, samples=19 00:29:38.600 iops : min= 48, max= 1278, avg=284.32, stdev=344.17, samples=19 00:29:38.600 lat (msec) : 50=27.75%, 100=17.87%, 250=28.00%, 500=16.10%, 750=0.83% 00:29:38.600 lat (msec) : 1000=6.51%, 2000=2.93% 
00:29:38.600 cpu : usr=0.06%, sys=0.87%, ctx=426, majf=0, minf=4097 00:29:38.600 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:29:38.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:38.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:38.600 issued rwts: total=2764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:38.600 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:38.600 job10: (groupid=0, jobs=1): err= 0: pid=3119537: Mon Oct 7 14:40:00 2024 00:29:38.600 read: IOPS=168, BW=42.1MiB/s (44.2MB/s)(429MiB/10173msec) 00:29:38.600 slat (usec): min=12, max=732678, avg=3594.30, stdev=28079.68 00:29:38.600 clat (msec): min=18, max=1420, avg=375.34, stdev=360.96 00:29:38.600 lat (msec): min=19, max=1890, avg=378.94, stdev=365.61 00:29:38.600 clat percentiles (msec): 00:29:38.600 | 1.00th=[ 25], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 50], 00:29:38.600 | 30.00th=[ 77], 40.00th=[ 105], 50.00th=[ 218], 60.00th=[ 334], 00:29:38.600 | 70.00th=[ 600], 80.00th=[ 793], 90.00th=[ 961], 95.00th=[ 1011], 00:29:38.600 | 99.00th=[ 1083], 99.50th=[ 1150], 99.90th=[ 1418], 99.95th=[ 1418], 00:29:38.600 | 99.99th=[ 1418] 00:29:38.600 bw ( KiB/s): min=12800, max=154112, per=4.44%, avg=44517.05, stdev=36638.71, samples=19 00:29:38.600 iops : min= 50, max= 602, avg=173.89, stdev=143.12, samples=19 00:29:38.600 lat (msec) : 20=0.17%, 50=20.17%, 100=19.13%, 250=12.94%, 500=12.30% 00:29:38.600 lat (msec) : 750=12.19%, 1000=17.20%, 2000=5.89% 00:29:38.600 cpu : usr=0.04%, sys=0.76%, ctx=383, majf=0, minf=4097 00:29:38.600 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:29:38.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:38.600 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:38.600 issued rwts: total=1715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:38.600 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:29:38.600 00:29:38.601 Run status group 0 (all jobs): 00:29:38.601 READ: bw=978MiB/s (1026MB/s), 27.5MiB/s-327MiB/s (28.9MB/s-342MB/s), io=9968MiB (10.5GB), run=10033-10191msec 00:29:38.601 00:29:38.601 Disk stats (read/write): 00:29:38.601 nvme0n1: ios=2181/0, merge=0/0, ticks=1188588/0, in_queue=1188588, util=95.97% 00:29:38.601 nvme10n1: ios=25712/0, merge=0/0, ticks=1219974/0, in_queue=1219974, util=96.30% 00:29:38.601 nvme1n1: ios=4389/0, merge=0/0, ticks=1185292/0, in_queue=1185292, util=97.02% 00:29:38.601 nvme2n1: ios=9452/0, merge=0/0, ticks=1219427/0, in_queue=1219427, util=97.12% 00:29:38.601 nvme3n1: ios=5425/0, merge=0/0, ticks=1221501/0, in_queue=1221501, util=97.23% 00:29:38.601 nvme4n1: ios=5634/0, merge=0/0, ticks=1219535/0, in_queue=1219535, util=97.76% 00:29:38.601 nvme5n1: ios=10057/0, merge=0/0, ticks=1219109/0, in_queue=1219109, util=97.95% 00:29:38.601 nvme6n1: ios=3318/0, merge=0/0, ticks=1180491/0, in_queue=1180491, util=98.25% 00:29:38.601 nvme7n1: ios=2124/0, merge=0/0, ticks=1194105/0, in_queue=1194105, util=98.76% 00:29:38.601 nvme8n1: ios=5417/0, merge=0/0, ticks=1181814/0, in_queue=1181814, util=98.94% 00:29:38.601 nvme9n1: ios=3366/0, merge=0/0, ticks=1208115/0, in_queue=1208115, util=99.21% 00:29:38.601 14:40:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:29:38.601 [global] 00:29:38.601 thread=1 00:29:38.601 invalidate=1 00:29:38.601 rw=randwrite 00:29:38.601 time_based=1 00:29:38.601 runtime=10 00:29:38.601 ioengine=libaio 00:29:38.601 direct=1 00:29:38.601 bs=262144 00:29:38.601 iodepth=64 00:29:38.601 norandommap=1 00:29:38.601 numjobs=1 00:29:38.601 00:29:38.601 [job0] 00:29:38.601 filename=/dev/nvme0n1 00:29:38.601 [job1] 00:29:38.601 filename=/dev/nvme10n1 00:29:38.601 [job2] 00:29:38.601 filename=/dev/nvme1n1 00:29:38.601 [job3] 
00:29:38.601 filename=/dev/nvme2n1 00:29:38.601 [job4] 00:29:38.601 filename=/dev/nvme3n1 00:29:38.601 [job5] 00:29:38.601 filename=/dev/nvme4n1 00:29:38.601 [job6] 00:29:38.601 filename=/dev/nvme5n1 00:29:38.601 [job7] 00:29:38.601 filename=/dev/nvme6n1 00:29:38.601 [job8] 00:29:38.601 filename=/dev/nvme7n1 00:29:38.601 [job9] 00:29:38.601 filename=/dev/nvme8n1 00:29:38.601 [job10] 00:29:38.601 filename=/dev/nvme9n1 00:29:38.601 Could not set queue depth (nvme0n1) 00:29:38.601 Could not set queue depth (nvme10n1) 00:29:38.601 Could not set queue depth (nvme1n1) 00:29:38.601 Could not set queue depth (nvme2n1) 00:29:38.601 Could not set queue depth (nvme3n1) 00:29:38.601 Could not set queue depth (nvme4n1) 00:29:38.601 Could not set queue depth (nvme5n1) 00:29:38.601 Could not set queue depth (nvme6n1) 00:29:38.601 Could not set queue depth (nvme7n1) 00:29:38.601 Could not set queue depth (nvme8n1) 00:29:38.601 Could not set queue depth (nvme9n1) 00:29:38.601 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:38.601 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:38.601 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:38.601 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:38.601 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:38.601 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:38.601 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:38.601 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:38.601 
job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:38.601 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:38.601 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:29:38.601 fio-3.35 00:29:38.601 Starting 11 threads 00:29:48.601 00:29:48.601 job0: (groupid=0, jobs=1): err= 0: pid=3121037: Mon Oct 7 14:40:11 2024 00:29:48.601 write: IOPS=609, BW=152MiB/s (160MB/s)(1538MiB/10096msec); 0 zone resets 00:29:48.601 slat (usec): min=20, max=44344, avg=1537.55, stdev=3067.76 00:29:48.601 clat (msec): min=2, max=223, avg=103.44, stdev=37.70 00:29:48.601 lat (msec): min=2, max=223, avg=104.98, stdev=38.25 00:29:48.601 clat percentiles (msec): 00:29:48.601 | 1.00th=[ 11], 5.00th=[ 30], 10.00th=[ 56], 20.00th=[ 65], 00:29:48.601 | 30.00th=[ 84], 40.00th=[ 111], 50.00th=[ 115], 60.00th=[ 118], 00:29:48.601 | 70.00th=[ 124], 80.00th=[ 130], 90.00th=[ 142], 95.00th=[ 153], 00:29:48.601 | 99.00th=[ 194], 99.50th=[ 199], 99.90th=[ 209], 99.95th=[ 215], 00:29:48.601 | 99.99th=[ 224] 00:29:48.601 bw ( KiB/s): min=88576, max=274432, per=15.04%, avg=155904.00, stdev=52751.17, samples=20 00:29:48.601 iops : min= 346, max= 1072, avg=609.00, stdev=206.06, samples=20 00:29:48.601 lat (msec) : 4=0.16%, 10=0.81%, 20=2.15%, 50=5.15%, 100=24.90% 00:29:48.601 lat (msec) : 250=66.83% 00:29:48.601 cpu : usr=1.26%, sys=1.89%, ctx=2000, majf=0, minf=1 00:29:48.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:29:48.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:48.601 issued rwts: total=0,6153,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.601 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:48.601 job1: (groupid=0, jobs=1): err= 
0: pid=3121038: Mon Oct 7 14:40:11 2024 00:29:48.601 write: IOPS=614, BW=154MiB/s (161MB/s)(1565MiB/10184msec); 0 zone resets 00:29:48.601 slat (usec): min=15, max=74324, avg=1392.47, stdev=4110.01 00:29:48.601 clat (usec): min=1952, max=457551, avg=102667.98, stdev=95371.20 00:29:48.601 lat (msec): min=2, max=457, avg=104.06, stdev=96.70 00:29:48.601 clat percentiles (msec): 00:29:48.601 | 1.00th=[ 7], 5.00th=[ 24], 10.00th=[ 45], 20.00th=[ 51], 00:29:48.601 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 68], 60.00th=[ 71], 00:29:48.601 | 70.00th=[ 89], 80.00th=[ 122], 90.00th=[ 264], 95.00th=[ 326], 00:29:48.601 | 99.00th=[ 443], 99.50th=[ 447], 99.90th=[ 456], 99.95th=[ 456], 00:29:48.601 | 99.99th=[ 460] 00:29:48.601 bw ( KiB/s): min=36864, max=311808, per=15.31%, avg=158617.60, stdev=95252.66, samples=20 00:29:48.601 iops : min= 144, max= 1218, avg=619.60, stdev=372.08, samples=20 00:29:48.601 lat (msec) : 2=0.02%, 4=0.53%, 10=1.31%, 20=2.27%, 50=13.95% 00:29:48.601 lat (msec) : 100=56.77%, 250=13.40%, 500=11.76% 00:29:48.601 cpu : usr=1.17%, sys=1.97%, ctx=2414, majf=0, minf=1 00:29:48.601 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:29:48.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.601 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:48.601 issued rwts: total=0,6259,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.601 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:48.601 job2: (groupid=0, jobs=1): err= 0: pid=3121050: Mon Oct 7 14:40:11 2024 00:29:48.601 write: IOPS=292, BW=73.1MiB/s (76.7MB/s)(745MiB/10185msec); 0 zone resets 00:29:48.601 slat (usec): min=22, max=39439, avg=3355.73, stdev=6057.71 00:29:48.602 clat (msec): min=25, max=432, avg=215.28, stdev=49.35 00:29:48.602 lat (msec): min=25, max=432, avg=218.64, stdev=49.71 00:29:48.602 clat percentiles (msec): 00:29:48.602 | 1.00th=[ 102], 5.00th=[ 138], 10.00th=[ 171], 20.00th=[ 186], 
00:29:48.602 | 30.00th=[ 192], 40.00th=[ 201], 50.00th=[ 209], 60.00th=[ 218], 00:29:48.602 | 70.00th=[ 226], 80.00th=[ 247], 90.00th=[ 292], 95.00th=[ 313], 00:29:48.602 | 99.00th=[ 330], 99.50th=[ 368], 99.90th=[ 418], 99.95th=[ 430], 00:29:48.602 | 99.99th=[ 435] 00:29:48.602 bw ( KiB/s): min=55296, max=102400, per=7.20%, avg=74649.60, stdev=12457.05, samples=20 00:29:48.602 iops : min= 216, max= 400, avg=291.60, stdev=48.66, samples=20 00:29:48.602 lat (msec) : 50=0.40%, 100=0.54%, 250=79.80%, 500=19.26% 00:29:48.602 cpu : usr=0.73%, sys=0.68%, ctx=733, majf=0, minf=1 00:29:48.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:29:48.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:48.602 issued rwts: total=0,2980,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.602 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:48.602 job3: (groupid=0, jobs=1): err= 0: pid=3121051: Mon Oct 7 14:40:11 2024 00:29:48.602 write: IOPS=225, BW=56.4MiB/s (59.2MB/s)(575MiB/10183msec); 0 zone resets 00:29:48.602 slat (usec): min=24, max=62349, avg=4158.10, stdev=7868.40 00:29:48.602 clat (msec): min=3, max=442, avg=279.12, stdev=70.97 00:29:48.602 lat (msec): min=3, max=442, avg=283.28, stdev=71.72 00:29:48.602 clat percentiles (msec): 00:29:48.602 | 1.00th=[ 6], 5.00th=[ 159], 10.00th=[ 209], 20.00th=[ 247], 00:29:48.602 | 30.00th=[ 257], 40.00th=[ 266], 50.00th=[ 271], 60.00th=[ 279], 00:29:48.602 | 70.00th=[ 305], 80.00th=[ 342], 90.00th=[ 380], 95.00th=[ 393], 00:29:48.602 | 99.00th=[ 401], 99.50th=[ 405], 99.90th=[ 426], 99.95th=[ 443], 00:29:48.602 | 99.99th=[ 443] 00:29:48.602 bw ( KiB/s): min=40960, max=73728, per=5.52%, avg=57241.60, stdev=9870.35, samples=20 00:29:48.602 iops : min= 160, max= 288, avg=223.60, stdev=38.56, samples=20 00:29:48.602 lat (msec) : 4=0.04%, 10=1.44%, 50=0.17%, 100=0.61%, 250=20.92% 
00:29:48.602 lat (msec) : 500=76.82% 00:29:48.602 cpu : usr=0.49%, sys=0.71%, ctx=652, majf=0, minf=1 00:29:48.602 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:29:48.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:48.602 issued rwts: total=0,2299,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.602 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:48.602 job4: (groupid=0, jobs=1): err= 0: pid=3121052: Mon Oct 7 14:40:11 2024 00:29:48.602 write: IOPS=369, BW=92.3MiB/s (96.8MB/s)(940MiB/10186msec); 0 zone resets 00:29:48.602 slat (usec): min=28, max=168190, avg=2511.82, stdev=6192.54 00:29:48.602 clat (msec): min=10, max=433, avg=170.74, stdev=86.51 00:29:48.602 lat (msec): min=11, max=433, avg=173.25, stdev=87.72 00:29:48.602 clat percentiles (msec): 00:29:48.602 | 1.00th=[ 27], 5.00th=[ 34], 10.00th=[ 56], 20.00th=[ 58], 00:29:48.602 | 30.00th=[ 96], 40.00th=[ 186], 50.00th=[ 199], 60.00th=[ 209], 00:29:48.602 | 70.00th=[ 220], 80.00th=[ 239], 90.00th=[ 271], 95.00th=[ 300], 00:29:48.602 | 99.00th=[ 326], 99.50th=[ 355], 99.90th=[ 418], 99.95th=[ 435], 00:29:48.602 | 99.99th=[ 435] 00:29:48.602 bw ( KiB/s): min=57344, max=307200, per=9.13%, avg=94643.20, stdev=59407.63, samples=20 00:29:48.602 iops : min= 224, max= 1200, avg=369.70, stdev=232.06, samples=20 00:29:48.602 lat (msec) : 20=0.37%, 50=7.10%, 100=22.95%, 250=54.08%, 500=15.50% 00:29:48.602 cpu : usr=0.84%, sys=1.12%, ctx=1258, majf=0, minf=1 00:29:48.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:29:48.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:48.602 issued rwts: total=0,3761,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.602 latency : target=0, window=0, percentile=100.00%, depth=64 
00:29:48.602 job5: (groupid=0, jobs=1): err= 0: pid=3121053: Mon Oct 7 14:40:11 2024 00:29:48.602 write: IOPS=424, BW=106MiB/s (111MB/s)(1071MiB/10097msec); 0 zone resets 00:29:48.602 slat (usec): min=22, max=287761, avg=1986.32, stdev=7757.20 00:29:48.602 clat (msec): min=2, max=643, avg=148.71, stdev=121.15 00:29:48.602 lat (msec): min=2, max=643, avg=150.70, stdev=122.53 00:29:48.602 clat percentiles (msec): 00:29:48.602 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 32], 20.00th=[ 58], 00:29:48.602 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 109], 60.00th=[ 138], 00:29:48.602 | 70.00th=[ 203], 80.00th=[ 264], 90.00th=[ 300], 95.00th=[ 397], 00:29:48.602 | 99.00th=[ 485], 99.50th=[ 617], 99.90th=[ 634], 99.95th=[ 642], 00:29:48.602 | 99.99th=[ 642] 00:29:48.602 bw ( KiB/s): min=36864, max=281088, per=10.43%, avg=108083.20, stdev=69994.82, samples=20 00:29:48.602 iops : min= 144, max= 1098, avg=422.20, stdev=273.42, samples=20 00:29:48.602 lat (msec) : 4=0.63%, 10=2.59%, 20=3.66%, 50=6.98%, 100=33.28% 00:29:48.602 lat (msec) : 250=27.98%, 500=23.94%, 750=0.93% 00:29:48.602 cpu : usr=0.91%, sys=1.42%, ctx=1821, majf=0, minf=2 00:29:48.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:29:48.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:48.602 issued rwts: total=0,4285,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.602 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:48.602 job6: (groupid=0, jobs=1): err= 0: pid=3121054: Mon Oct 7 14:40:11 2024 00:29:48.602 write: IOPS=255, BW=64.0MiB/s (67.1MB/s)(652MiB/10188msec); 0 zone resets 00:29:48.602 slat (usec): min=22, max=100232, avg=3666.13, stdev=7867.41 00:29:48.602 clat (msec): min=7, max=440, avg=246.33, stdev=111.85 00:29:48.602 lat (msec): min=7, max=440, avg=250.00, stdev=113.47 00:29:48.602 clat percentiles (msec): 00:29:48.602 | 1.00th=[ 9], 5.00th=[ 15], 
10.00th=[ 44], 20.00th=[ 159], 00:29:48.602 | 30.00th=[ 243], 40.00th=[ 257], 50.00th=[ 266], 60.00th=[ 271], 00:29:48.602 | 70.00th=[ 296], 80.00th=[ 334], 90.00th=[ 388], 95.00th=[ 414], 00:29:48.602 | 99.00th=[ 435], 99.50th=[ 435], 99.90th=[ 439], 99.95th=[ 439], 00:29:48.602 | 99.99th=[ 439] 00:29:48.602 bw ( KiB/s): min=36864, max=154624, per=6.28%, avg=65105.10, stdev=29294.67, samples=20 00:29:48.602 iops : min= 144, max= 604, avg=254.30, stdev=114.45, samples=20 00:29:48.602 lat (msec) : 10=2.76%, 20=4.14%, 50=4.10%, 100=4.14%, 250=19.56% 00:29:48.602 lat (msec) : 500=65.29% 00:29:48.602 cpu : usr=0.61%, sys=0.80%, ctx=956, majf=0, minf=1 00:29:48.602 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:29:48.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:48.602 issued rwts: total=0,2607,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.602 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:48.602 job7: (groupid=0, jobs=1): err= 0: pid=3121055: Mon Oct 7 14:40:11 2024 00:29:48.602 write: IOPS=247, BW=61.9MiB/s (64.9MB/s)(631MiB/10188msec); 0 zone resets 00:29:48.602 slat (usec): min=25, max=137654, avg=3467.37, stdev=7941.14 00:29:48.602 clat (msec): min=7, max=436, avg=254.75, stdev=91.02 00:29:48.602 lat (msec): min=7, max=436, avg=258.21, stdev=92.46 00:29:48.602 clat percentiles (msec): 00:29:48.602 | 1.00th=[ 16], 5.00th=[ 64], 10.00th=[ 116], 20.00th=[ 192], 00:29:48.602 | 30.00th=[ 245], 40.00th=[ 257], 50.00th=[ 266], 60.00th=[ 271], 00:29:48.602 | 70.00th=[ 284], 80.00th=[ 326], 90.00th=[ 380], 95.00th=[ 393], 00:29:48.602 | 99.00th=[ 401], 99.50th=[ 409], 99.90th=[ 422], 99.95th=[ 435], 00:29:48.602 | 99.99th=[ 435] 00:29:48.602 bw ( KiB/s): min=40960, max=120561, per=6.08%, avg=62988.05, stdev=19602.12, samples=20 00:29:48.602 iops : min= 160, max= 470, avg=246.00, stdev=76.43, samples=20 
00:29:48.602 lat (msec) : 10=0.36%, 20=1.07%, 50=2.73%, 100=4.71%, 250=25.12% 00:29:48.602 lat (msec) : 500=66.01% 00:29:48.602 cpu : usr=0.61%, sys=1.01%, ctx=1053, majf=0, minf=1 00:29:48.602 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:29:48.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:48.602 issued rwts: total=0,2524,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.602 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:48.602 job8: (groupid=0, jobs=1): err= 0: pid=3121060: Mon Oct 7 14:40:11 2024 00:29:48.602 write: IOPS=414, BW=104MiB/s (109MB/s)(1054MiB/10184msec); 0 zone resets 00:29:48.602 slat (usec): min=24, max=145282, avg=2223.46, stdev=5084.18 00:29:48.602 clat (msec): min=38, max=432, avg=152.23, stdev=74.90 00:29:48.602 lat (msec): min=38, max=432, avg=154.46, stdev=75.82 00:29:48.602 clat percentiles (msec): 00:29:48.602 | 1.00th=[ 57], 5.00th=[ 107], 10.00th=[ 109], 20.00th=[ 114], 00:29:48.602 | 30.00th=[ 116], 40.00th=[ 117], 50.00th=[ 122], 60.00th=[ 125], 00:29:48.602 | 70.00th=[ 138], 80.00th=[ 171], 90.00th=[ 300], 95.00th=[ 330], 00:29:48.602 | 99.00th=[ 405], 99.50th=[ 414], 99.90th=[ 426], 99.95th=[ 426], 00:29:48.602 | 99.99th=[ 435] 00:29:48.602 bw ( KiB/s): min=36864, max=143872, per=10.26%, avg=106316.80, stdev=37793.30, samples=20 00:29:48.602 iops : min= 144, max= 562, avg=415.30, stdev=147.63, samples=20 00:29:48.602 lat (msec) : 50=0.66%, 100=2.94%, 250=83.57%, 500=12.83% 00:29:48.602 cpu : usr=0.83%, sys=1.31%, ctx=1270, majf=0, minf=1 00:29:48.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:29:48.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:48.602 issued rwts: total=0,4217,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:29:48.602 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:48.602 job9: (groupid=0, jobs=1): err= 0: pid=3121067: Mon Oct 7 14:40:11 2024 00:29:48.602 write: IOPS=296, BW=74.1MiB/s (77.7MB/s)(754MiB/10185msec); 0 zone resets 00:29:48.602 slat (usec): min=26, max=37995, avg=3221.75, stdev=5944.34 00:29:48.602 clat (msec): min=25, max=434, avg=212.73, stdev=50.33 00:29:48.602 lat (msec): min=25, max=434, avg=215.95, stdev=50.80 00:29:48.602 clat percentiles (msec): 00:29:48.602 | 1.00th=[ 103], 5.00th=[ 132], 10.00th=[ 161], 20.00th=[ 180], 00:29:48.602 | 30.00th=[ 190], 40.00th=[ 201], 50.00th=[ 209], 60.00th=[ 215], 00:29:48.602 | 70.00th=[ 224], 80.00th=[ 247], 90.00th=[ 292], 95.00th=[ 309], 00:29:48.602 | 99.00th=[ 326], 99.50th=[ 372], 99.90th=[ 422], 99.95th=[ 435], 00:29:48.602 | 99.99th=[ 435] 00:29:48.602 bw ( KiB/s): min=57344, max=104448, per=7.30%, avg=75596.80, stdev=11731.13, samples=20 00:29:48.602 iops : min= 224, max= 408, avg=295.30, stdev=45.82, samples=20 00:29:48.603 lat (msec) : 50=0.40%, 100=0.50%, 250=79.28%, 500=19.82% 00:29:48.603 cpu : usr=0.70%, sys=0.74%, ctx=800, majf=0, minf=1 00:29:48.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:29:48.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:48.603 issued rwts: total=0,3017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.603 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:48.603 job10: (groupid=0, jobs=1): err= 0: pid=3121073: Mon Oct 7 14:40:11 2024 00:29:48.603 write: IOPS=308, BW=77.0MiB/s (80.8MB/s)(785MiB/10185msec); 0 zone resets 00:29:48.603 slat (usec): min=27, max=45755, avg=2938.56, stdev=5697.40 00:29:48.603 clat (msec): min=11, max=433, avg=204.64, stdev=55.94 00:29:48.603 lat (msec): min=11, max=433, avg=207.58, stdev=56.52 00:29:48.603 clat percentiles (msec): 00:29:48.603 | 1.00th=[ 66], 5.00th=[ 
100], 10.00th=[ 131], 20.00th=[ 174], 00:29:48.603 | 30.00th=[ 188], 40.00th=[ 197], 50.00th=[ 205], 60.00th=[ 213], 00:29:48.603 | 70.00th=[ 220], 80.00th=[ 241], 90.00th=[ 284], 95.00th=[ 309], 00:29:48.603 | 99.00th=[ 326], 99.50th=[ 368], 99.90th=[ 418], 99.95th=[ 435], 00:29:48.603 | 99.99th=[ 435] 00:29:48.603 bw ( KiB/s): min=59392, max=124665, per=7.60%, avg=78706.85, stdev=15174.42, samples=20 00:29:48.603 iops : min= 232, max= 486, avg=307.40, stdev=59.12, samples=20 00:29:48.603 lat (msec) : 20=0.19%, 50=0.51%, 100=4.65%, 250=76.93%, 500=17.72% 00:29:48.603 cpu : usr=0.66%, sys=0.95%, ctx=986, majf=0, minf=1 00:29:48.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:29:48.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:29:48.603 issued rwts: total=0,3138,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.603 latency : target=0, window=0, percentile=100.00%, depth=64 00:29:48.603 00:29:48.603 Run status group 0 (all jobs): 00:29:48.603 WRITE: bw=1012MiB/s (1061MB/s), 56.4MiB/s-154MiB/s (59.2MB/s-161MB/s), io=10.1GiB (10.8GB), run=10096-10188msec 00:29:48.603 00:29:48.603 Disk stats (read/write): 00:29:48.603 nvme0n1: ios=49/12282, merge=0/0, ticks=87/1230258, in_queue=1230345, util=96.76% 00:29:48.603 nvme10n1: ios=44/12439, merge=0/0, ticks=703/1224521, in_queue=1225224, util=99.82% 00:29:48.603 nvme1n1: ios=24/5875, merge=0/0, ticks=330/1218983, in_queue=1219313, util=97.87% 00:29:48.603 nvme2n1: ios=42/4520, merge=0/0, ticks=1784/1220515, in_queue=1222299, util=99.87% 00:29:48.603 nvme3n1: ios=43/7439, merge=0/0, ticks=3025/1205261, in_queue=1208286, util=99.98% 00:29:48.603 nvme4n1: ios=44/8541, merge=0/0, ticks=2920/1191556, in_queue=1194476, util=99.88% 00:29:48.603 nvme5n1: ios=0/5129, merge=0/0, ticks=0/1220603, in_queue=1220603, util=97.92% 00:29:48.603 nvme6n1: ios=0/4963, merge=0/0, ticks=0/1224734, 
in_queue=1224734, util=98.09% 00:29:48.603 nvme7n1: ios=43/8350, merge=0/0, ticks=760/1220485, in_queue=1221245, util=99.89% 00:29:48.603 nvme8n1: ios=43/5950, merge=0/0, ticks=159/1220181, in_queue=1220340, util=99.94% 00:29:48.603 nvme9n1: ios=41/6191, merge=0/0, ticks=2166/1220051, in_queue=1222217, util=99.93% 00:29:48.603 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:29:48.603 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:29:48.603 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:48.603 14:40:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:48.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:48.863 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:29:48.863 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:29:48.863 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:48.863 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:29:48.863 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:48.863 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:29:48.863 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:29:48.863 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:48.863 14:40:12 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.863 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:48.863 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.863 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:48.863 14:40:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:29:49.434 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:29:49.434 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:29:49.434 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:29:49.434 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:49.434 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:29:49.434 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:49.434 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:29:49.434 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:29:49.434 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:49.434 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.434 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:29:49.434 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.434 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:49.434 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:29:49.695 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:29:49.695 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:29:49.695 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:29:49.695 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:49.695 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:29:49.695 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:29:49.695 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:49.955 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:29:49.955 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:49.955 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.955 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:49.955 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.955 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 
-- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:49.955 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:29:50.216 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:29:50.216 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:29:50.216 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:29:50.216 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:50.216 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:29:50.216 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:50.216 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:29:50.216 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:29:50.216 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:29:50.216 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.216 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:50.216 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.216 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:50.216 14:40:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:29:50.787 
NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:29:50.787 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:29:50.787 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:29:50.787 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:50.787 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:29:50.787 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:50.787 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:29:50.787 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:29:50.787 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:29:50.788 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.788 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:50.788 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.788 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:50.788 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:29:51.048 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:29:51.048 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:29:51.048 14:40:14 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:29:51.048 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:51.048 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:29:51.048 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:51.048 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:29:51.048 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:29:51.048 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:29:51.048 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.048 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:51.048 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.048 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:51.048 14:40:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:29:51.308 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:29:51.308 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:29:51.570 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:29:51.570 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o 
NAME,SERIAL 00:29:51.570 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:29:51.570 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:51.570 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:29:51.570 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:29:51.570 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:29:51.570 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.570 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:51.570 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.570 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:51.570 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:29:51.831 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:29:51.831 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:29:51.831 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:29:51.831 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:51.831 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:29:51.831 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:29:51.831 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:51.831 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:29:51.831 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:29:51.831 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.831 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:51.831 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.831 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:51.831 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:29:52.093 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:29:52.093 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:29:52.093 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:29:52.093 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:52.093 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:29:52.093 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:29:52.093 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:52.093 14:40:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:29:52.093 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:29:52.093 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.093 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:52.093 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.093 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:52.093 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:29:52.355 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:29:52.355 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:29:52.355 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:29:52.355 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:52.355 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:29:52.355 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:52.355 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:29:52.355 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:29:52.355 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:29:52.355 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.355 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:52.355 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.355 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:29:52.355 14:40:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:29:52.616 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:29:52.616 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:29:52.616 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:29:52.616 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:29:52.616 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:29:52.616 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:29:52.616 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:29:52.616 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:29:52.616 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:29:52.616 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.616 14:40:16 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:52.616 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.616 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:29:52.616 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:29:52.616 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:29:52.616 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:52.616 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:29:52.616 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:52.617 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:29:52.617 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:52.617 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:52.617 rmmod nvme_tcp 00:29:52.617 rmmod nvme_fabrics 00:29:52.617 rmmod nvme_keyring 00:29:52.617 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:52.878 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:29:52.878 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:29:52.878 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@515 -- # '[' -n 3110199 ']' 00:29:52.878 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # killprocess 3110199 00:29:52.878 14:40:16 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 3110199 ']' 00:29:52.878 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 3110199 00:29:52.878 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:29:52.878 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:52.878 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3110199 00:29:52.878 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:52.878 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:52.878 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3110199' 00:29:52.878 killing process with pid 3110199 00:29:52.878 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 3110199 00:29:52.878 14:40:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 3110199 00:29:55.425 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:55.425 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:55.425 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:55.425 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:29:55.425 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # iptables-save 00:29:55.425 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:55.425 
14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # iptables-restore 00:29:55.425 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:55.425 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:55.425 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.425 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:55.425 14:40:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.341 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:57.341 00:29:57.341 real 1m21.837s 00:29:57.341 user 5m11.222s 00:29:57.341 sys 0m16.552s 00:29:57.341 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:57.341 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:29:57.341 ************************************ 00:29:57.341 END TEST nvmf_multiconnection 00:29:57.341 ************************************ 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:57.342 ************************************ 00:29:57.342 START TEST nvmf_initiator_timeout 
00:29:57.342 ************************************ 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:29:57.342 * Looking for test storage... 00:29:57.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:29:57.342 14:40:20 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:57.342 14:40:20 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:57.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.342 --rc genhtml_branch_coverage=1 00:29:57.342 --rc genhtml_function_coverage=1 00:29:57.342 --rc genhtml_legend=1 00:29:57.342 --rc geninfo_all_blocks=1 00:29:57.342 --rc geninfo_unexecuted_blocks=1 00:29:57.342 00:29:57.342 ' 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:57.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.342 --rc genhtml_branch_coverage=1 00:29:57.342 --rc genhtml_function_coverage=1 00:29:57.342 --rc genhtml_legend=1 00:29:57.342 --rc geninfo_all_blocks=1 00:29:57.342 --rc geninfo_unexecuted_blocks=1 00:29:57.342 00:29:57.342 ' 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:57.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.342 --rc genhtml_branch_coverage=1 00:29:57.342 --rc genhtml_function_coverage=1 00:29:57.342 --rc genhtml_legend=1 00:29:57.342 --rc geninfo_all_blocks=1 00:29:57.342 --rc geninfo_unexecuted_blocks=1 00:29:57.342 00:29:57.342 ' 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:57.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.342 --rc genhtml_branch_coverage=1 00:29:57.342 --rc genhtml_function_coverage=1 
00:29:57.342 --rc genhtml_legend=1 00:29:57.342 --rc geninfo_all_blocks=1 00:29:57.342 --rc geninfo_unexecuted_blocks=1 00:29:57.342 00:29:57.342 ' 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:57.342 14:40:20 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:57.342 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.343 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.343 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.343 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:57.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:57.343 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:57.343 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:57.343 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:57.343 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:57.343 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:57.343 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:29:57.343 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:57.343 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.343 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:57.343 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:57.343 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:57.343 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.343 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:57.343 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.343 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:57.343 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:57.343 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:29:57.343 14:40:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:05.491 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:05.491 14:40:27 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:30:05.491 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:05.491 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:05.491 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:05.491 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:05.491 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:05.491 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:30:05.491 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:05.491 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:30:05.491 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:30:05.491 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:30:05.491 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:30:05.491 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:30:05.491 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:30:05.491 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:05.491 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:05.491 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:30:05.491 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:05.491 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:05.491 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:05.491 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:05.491 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:05.491 14:40:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:05.491 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:05.491 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:05.491 14:40:28 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:05.491 Found net devices under 0000:31:00.0: cvl_0_0 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.491 14:40:28 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:05.491 Found net devices under 0000:31:00.1: cvl_0_1 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # is_hw=yes 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:05.491 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:05.492 14:40:28 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:05.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:05.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:30:05.492 00:30:05.492 --- 10.0.0.2 ping statistics --- 00:30:05.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.492 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:05.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:30:05.492 00:30:05.492 --- 10.0.0.1 ping statistics --- 00:30:05.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.492 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # return 0 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # nvmfpid=3128110 
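The nvmftestinit sequence traced above wires one port of a two-port NIC into a network namespace so a single host can act as both NVMe/TCP target and initiator. A minimal sketch of that wiring, assembled from the `ip`/`iptables`/`ping` commands in the trace (requires root, and assumes `cvl_0_0`/`cvl_0_1` are the two ports of the same NIC as in this run):

```shell
set -e
ip netns add cvl_0_0_ns_spdk                 # namespace that will host the SPDK target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic (port 4420) in on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity check: each side can reach the other, as the log's ping output confirms
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

This is why the target app is later launched under `ip netns exec cvl_0_0_ns_spdk` while the `nvme connect` call runs in the default namespace.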
00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # waitforlisten 3128110 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 3128110 ']' 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:05.492 14:40:28 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:05.492 [2024-10-07 14:40:28.447506] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:30:05.492 [2024-10-07 14:40:28.447637] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.492 [2024-10-07 14:40:28.591168] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:05.492 [2024-10-07 14:40:28.775735] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:05.492 [2024-10-07 14:40:28.775785] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.492 [2024-10-07 14:40:28.775797] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.492 [2024-10-07 14:40:28.775810] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:05.492 [2024-10-07 14:40:28.775819] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:05.492 [2024-10-07 14:40:28.778301] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.492 [2024-10-07 14:40:28.778386] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:30:05.492 [2024-10-07 14:40:28.778528] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.492 [2024-10-07 14:40:28.778549] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:30:05.753 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:05.753 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:30:05.753 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:05.753 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:05.753 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:05.753 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.753 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:30:05.753 
14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:05.753 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.753 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:05.754 Malloc0 00:30:05.754 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.754 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:30:05.754 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.754 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:05.754 Delay0 00:30:05.754 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.754 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:05.754 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.754 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:05.754 [2024-10-07 14:40:29.335584] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.754 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.754 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:05.754 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.754 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:05.754 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.754 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:05.754 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.754 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:05.754 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.754 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:05.754 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.754 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:05.754 [2024-10-07 14:40:29.375912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.754 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.754 14:40:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:07.666 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:30:07.666 
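Condensed from the `rpc_cmd` calls traced above, the target-side setup performed by initiator_timeout.sh reduces to the following RPC sequence (the rpc.py path is an assumption; NQN, serial, address, and the 30 ns delay parameters are taken verbatim from the log):

```shell
RPC="scripts/rpc.py"   # hypothetical path; the test drives these via rpc_cmd
$RPC bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB / 512 B backing bdev
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # delay bdev, 30 ns latencies
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side (default namespace) then attaches to the exported namespace:
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
```

Layering Delay0 over Malloc0 is the point of the test: the later `bdev_delay_update_latency` calls raise the delay above the initiator's I/O timeout and then drop it back, which produces the ~41 s completion latencies visible in the fio clat percentiles.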
14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:30:07.666 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:30:07.666 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:30:07.666 14:40:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:30:09.578 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:30:09.578 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:30:09.578 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:30:09.578 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:30:09.578 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:30:09.578 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:30:09.578 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3129125 00:30:09.578 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:30:09.578 14:40:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:30:09.578 [global] 00:30:09.578 thread=1 00:30:09.578 invalidate=1 00:30:09.578 rw=write 00:30:09.578 time_based=1 00:30:09.578 runtime=60 00:30:09.578 ioengine=libaio 00:30:09.578 direct=1 00:30:09.578 bs=4096 00:30:09.578 
iodepth=1 00:30:09.578 norandommap=0 00:30:09.578 numjobs=1 00:30:09.578 00:30:09.578 verify_dump=1 00:30:09.578 verify_backlog=512 00:30:09.578 verify_state_save=0 00:30:09.578 do_verify=1 00:30:09.578 verify=crc32c-intel 00:30:09.578 [job0] 00:30:09.578 filename=/dev/nvme0n1 00:30:09.578 Could not set queue depth (nvme0n1) 00:30:09.578 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:09.578 fio-3.35 00:30:09.578 Starting 1 thread 00:30:12.878 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:30:12.878 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.878 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:12.878 true 00:30:12.878 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.878 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:30:12.878 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.878 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:12.878 true 00:30:12.878 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.878 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:30:12.878 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.878 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
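The `[global]`/`[job0]` lines that fio-wrapper echoes into the trace correspond to this job file (reconstructed from the log; verify against scripts/fio-wrapper for the authoritative template):

```ini
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=60
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
```

With `iodepth=1` and `do_verify=1`, each 4 KiB write is issued and verified serially, so the injected device latency translates directly into the low IOPS and long completion tails reported in the job summary.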
common/autotest_common.sh@10 -- # set +x 00:30:12.878 true 00:30:12.878 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.878 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:30:12.878 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.878 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:12.878 true 00:30:12.878 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.878 14:40:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:30:15.422 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:30:15.422 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.422 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:15.422 true 00:30:15.422 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.422 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:30:15.422 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.422 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:15.422 true 00:30:15.422 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.422 14:40:38 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:30:15.422 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.422 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:15.422 true 00:30:15.422 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.422 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:30:15.422 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.422 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:15.422 true 00:30:15.422 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.422 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:30:15.422 14:40:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3129125 00:31:11.689 00:31:11.689 job0: (groupid=0, jobs=1): err= 0: pid=3129290: Mon Oct 7 14:41:33 2024 00:31:11.689 read: IOPS=11, BW=45.7KiB/s (46.8kB/s)(2744KiB/60003msec) 00:31:11.689 slat (nsec): min=6716, max=61058, avg=26558.04, stdev=4953.42 00:31:11.689 clat (usec): min=450, max=41995k, avg=86529.36, stdev=1602536.06 00:31:11.689 lat (usec): min=457, max=41995k, avg=86555.92, stdev=1602536.04 00:31:11.689 clat percentiles (usec): 00:31:11.689 | 1.00th=[ 578], 5.00th=[ 685], 10.00th=[ 734], 00:31:11.689 | 20.00th=[ 824], 30.00th=[ 898], 40.00th=[ 1270], 00:31:11.689 | 50.00th=[ 41681], 60.00th=[ 41681], 70.00th=[ 42206], 00:31:11.689 | 80.00th=[ 42206], 90.00th=[ 
42206], 95.00th=[ 42206], 00:31:11.689 | 99.00th=[ 42206], 99.50th=[ 42730], 99.90th=[17112761], 00:31:11.689 | 99.95th=[17112761], 99.99th=[17112761] 00:31:11.689 write: IOPS=17, BW=68.3KiB/s (69.9kB/s)(4096KiB/60003msec); 0 zone resets 00:31:11.689 slat (nsec): min=9069, max=86885, avg=29984.14, stdev=11519.07 00:31:11.689 clat (usec): min=212, max=934, avg=567.82, stdev=104.50 00:31:11.689 lat (usec): min=221, max=968, avg=597.81, stdev=110.02 00:31:11.689 clat percentiles (usec): 00:31:11.689 | 1.00th=[ 262], 5.00th=[ 392], 10.00th=[ 437], 20.00th=[ 482], 00:31:11.689 | 30.00th=[ 519], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 594], 00:31:11.689 | 70.00th=[ 619], 80.00th=[ 660], 90.00th=[ 693], 95.00th=[ 725], 00:31:11.689 | 99.00th=[ 791], 99.50th=[ 824], 99.90th=[ 930], 99.95th=[ 938], 00:31:11.689 | 99.99th=[ 938] 00:31:11.689 bw ( KiB/s): min= 1984, max= 4096, per=100.00%, avg=2730.67, stdev=1184.14, samples=3 00:31:11.689 iops : min= 496, max= 1024, avg=682.67, stdev=296.04, samples=3 00:31:11.689 lat (usec) : 250=0.53%, 500=14.80%, 750=48.07%, 1000=10.53% 00:31:11.689 lat (msec) : 2=2.11%, 50=23.92%, >=2000=0.06% 00:31:11.689 cpu : usr=0.06%, sys=0.11%, ctx=1710, majf=0, minf=1 00:31:11.689 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:11.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:11.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:11.689 issued rwts: total=686,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:11.689 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:11.689 00:31:11.689 Run status group 0 (all jobs): 00:31:11.689 READ: bw=45.7KiB/s (46.8kB/s), 45.7KiB/s-45.7KiB/s (46.8kB/s-46.8kB/s), io=2744KiB (2810kB), run=60003-60003msec 00:31:11.689 WRITE: bw=68.3KiB/s (69.9kB/s), 68.3KiB/s-68.3KiB/s (69.9kB/s-69.9kB/s), io=4096KiB (4194kB), run=60003-60003msec 00:31:11.689 00:31:11.689 Disk stats (read/write): 00:31:11.689 nvme0n1: 
ios=782/1024, merge=0/0, ticks=17321/451, in_queue=17772, util=99.73% 00:31:11.689 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:11.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:11.689 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:11.689 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:31:11.689 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:31:11.689 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:11.689 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:11.689 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:31:11.689 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:31:11.689 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:31:11.689 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:31:11.689 nvmf hotplug test: fio successful as expected 00:31:11.689 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:11.689 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.689 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 
-- # set +x 00:31:11.689 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.689 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:31:11.689 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:31:11.689 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:31:11.689 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:11.689 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:31:11.689 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:11.689 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:31:11.689 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:11.690 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:11.690 rmmod nvme_tcp 00:31:11.690 rmmod nvme_fabrics 00:31:11.690 rmmod nvme_keyring 00:31:11.690 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:11.690 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:31:11.690 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:31:11.690 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@515 -- # '[' -n 3128110 ']' 00:31:11.690 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # killprocess 3128110 00:31:11.690 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # 
'[' -z 3128110 ']' 00:31:11.690 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 3128110 00:31:11.690 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:31:11.690 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:11.690 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3128110 00:31:11.690 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:11.690 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:11.690 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3128110' 00:31:11.690 killing process with pid 3128110 00:31:11.690 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 3128110 00:31:11.690 14:41:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 3128110 00:31:11.690 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:11.690 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:11.690 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:11.690 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:31:11.690 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # iptables-save 00:31:11.690 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:11.690 14:41:34 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # iptables-restore 00:31:11.690 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:11.690 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:11.690 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.690 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:11.690 14:41:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:13.605 14:41:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:13.605 00:31:13.605 real 1m16.133s 00:31:13.605 user 4m38.303s 00:31:13.605 sys 0m7.586s 00:31:13.605 14:41:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:13.605 14:41:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:13.605 ************************************ 00:31:13.605 END TEST nvmf_initiator_timeout 00:31:13.605 ************************************ 00:31:13.605 14:41:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:31:13.605 14:41:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:31:13.605 14:41:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:31:13.605 14:41:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:31:13.605 14:41:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:21.753 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.753 14:41:44 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:21.753 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:21.753 Found 
net devices under 0000:31:00.0: cvl_0_0 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:21.753 Found net devices under 0000:31:00.1: cvl_0_1 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:21.753 14:41:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
00:31:21.754 ************************************ 00:31:21.754 START TEST nvmf_perf_adq 00:31:21.754 ************************************ 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:31:21.754 * Looking for test storage... 00:31:21.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- 
# ver2_l=1 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:31:21.754 14:41:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:21.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.754 --rc genhtml_branch_coverage=1 00:31:21.754 --rc genhtml_function_coverage=1 00:31:21.754 --rc genhtml_legend=1 00:31:21.754 --rc geninfo_all_blocks=1 00:31:21.754 --rc geninfo_unexecuted_blocks=1 00:31:21.754 00:31:21.754 ' 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:21.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.754 --rc genhtml_branch_coverage=1 00:31:21.754 --rc genhtml_function_coverage=1 00:31:21.754 --rc genhtml_legend=1 00:31:21.754 --rc geninfo_all_blocks=1 00:31:21.754 --rc geninfo_unexecuted_blocks=1 00:31:21.754 00:31:21.754 ' 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:21.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.754 --rc genhtml_branch_coverage=1 00:31:21.754 --rc genhtml_function_coverage=1 00:31:21.754 --rc genhtml_legend=1 00:31:21.754 --rc geninfo_all_blocks=1 00:31:21.754 --rc geninfo_unexecuted_blocks=1 00:31:21.754 00:31:21.754 ' 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:21.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.754 --rc genhtml_branch_coverage=1 00:31:21.754 --rc genhtml_function_coverage=1 00:31:21.754 --rc genhtml_legend=1 00:31:21.754 --rc geninfo_all_blocks=1 00:31:21.754 --rc geninfo_unexecuted_blocks=1 00:31:21.754 00:31:21.754 ' 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:21.754 14:41:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:21.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:31:21.754 14:41:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:31:21.754 14:41:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:28.565 14:41:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:28.565 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:28.565 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:28.566 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:28.566 Found net devices under 0000:31:00.0: cvl_0_0 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 
0 )) 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:28.566 Found net devices under 0000:31:00.1: cvl_0_1 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:31:28.566 14:41:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:31:29.508 14:41:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:31:31.421 14:41:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:31:36.710 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:31:36.710 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:36.710 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:36.711 14:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:36.711 14:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:36.711 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:31:00.1 (0x8086 - 0x159b)' 00:31:36.711 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:36.711 Found net devices under 0000:31:00.0: cvl_0_0 
00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:36.711 Found net devices under 0000:31:00.1: cvl_0_1 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:36.711 14:41:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:36.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:36.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:31:36.711 00:31:36.711 --- 10.0.0.2 ping statistics --- 00:31:36.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.711 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:36.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:36.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:31:36.711 00:31:36.711 --- 10.0.0.1 ping statistics --- 00:31:36.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.711 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=3150616 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 3150616 00:31:36.711 
14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3150616 ']' 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:36.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:36.711 14:42:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:36.972 [2024-10-07 14:42:00.440618] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:31:36.972 [2024-10-07 14:42:00.440737] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:36.972 [2024-10-07 14:42:00.583502] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:37.233 [2024-10-07 14:42:00.768820] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:37.233 [2024-10-07 14:42:00.768872] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:37.233 [2024-10-07 14:42:00.768885] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:37.233 [2024-10-07 14:42:00.768898] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:37.233 [2024-10-07 14:42:00.768907] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:37.233 [2024-10-07 14:42:00.771133] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:37.233 [2024-10-07 14:42:00.771407] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:31:37.233 [2024-10-07 14:42:00.771532] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.233 [2024-10-07 14:42:00.771548] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:31:37.805 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:37.805 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:31:37.805 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:37.805 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:37.805 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:37.805 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:37.805 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:31:37.805 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:31:37.805 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:31:37.805 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.805 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:37.805 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.805 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:31:37.805 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:31:37.805 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.805 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:37.805 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.805 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:31:37.805 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.805 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:38.066 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.066 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:31:38.066 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.066 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:38.066 [2024-10-07 14:42:01.573610] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:38.066 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.066 
14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:38.066 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.066 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:38.066 Malloc1 00:31:38.066 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.066 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:38.066 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.066 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:38.066 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.066 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:38.066 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.066 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:38.066 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.066 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:38.066 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.067 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:38.067 [2024-10-07 14:42:01.672030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:31:38.067 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.067 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3150900 00:31:38.067 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:31:38.067 14:42:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:39.982 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:31:39.982 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.982 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:31:40.242 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.242 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:31:40.242 "tick_rate": 2400000000, 00:31:40.242 "poll_groups": [ 00:31:40.242 { 00:31:40.242 "name": "nvmf_tgt_poll_group_000", 00:31:40.242 "admin_qpairs": 1, 00:31:40.242 "io_qpairs": 1, 00:31:40.242 "current_admin_qpairs": 1, 00:31:40.242 "current_io_qpairs": 1, 00:31:40.242 "pending_bdev_io": 0, 00:31:40.242 "completed_nvme_io": 20091, 00:31:40.242 "transports": [ 00:31:40.242 { 00:31:40.242 "trtype": "TCP" 00:31:40.242 } 00:31:40.242 ] 00:31:40.242 }, 00:31:40.242 { 00:31:40.242 "name": "nvmf_tgt_poll_group_001", 00:31:40.242 "admin_qpairs": 0, 00:31:40.242 "io_qpairs": 1, 00:31:40.242 "current_admin_qpairs": 0, 00:31:40.242 "current_io_qpairs": 1, 00:31:40.242 "pending_bdev_io": 0, 00:31:40.242 "completed_nvme_io": 26181, 00:31:40.242 "transports": [ 
00:31:40.242 { 00:31:40.242 "trtype": "TCP" 00:31:40.242 } 00:31:40.242 ] 00:31:40.242 }, 00:31:40.242 { 00:31:40.242 "name": "nvmf_tgt_poll_group_002", 00:31:40.242 "admin_qpairs": 0, 00:31:40.243 "io_qpairs": 1, 00:31:40.243 "current_admin_qpairs": 0, 00:31:40.243 "current_io_qpairs": 1, 00:31:40.243 "pending_bdev_io": 0, 00:31:40.243 "completed_nvme_io": 21554, 00:31:40.243 "transports": [ 00:31:40.243 { 00:31:40.243 "trtype": "TCP" 00:31:40.243 } 00:31:40.243 ] 00:31:40.243 }, 00:31:40.243 { 00:31:40.243 "name": "nvmf_tgt_poll_group_003", 00:31:40.243 "admin_qpairs": 0, 00:31:40.243 "io_qpairs": 1, 00:31:40.243 "current_admin_qpairs": 0, 00:31:40.243 "current_io_qpairs": 1, 00:31:40.243 "pending_bdev_io": 0, 00:31:40.243 "completed_nvme_io": 19834, 00:31:40.243 "transports": [ 00:31:40.243 { 00:31:40.243 "trtype": "TCP" 00:31:40.243 } 00:31:40.243 ] 00:31:40.243 } 00:31:40.243 ] 00:31:40.243 }' 00:31:40.243 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:31:40.243 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:31:40.243 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:31:40.243 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:31:40.243 14:42:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3150900 00:31:48.410 Initializing NVMe Controllers 00:31:48.410 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:48.410 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:31:48.410 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:31:48.410 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:31:48.410 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:31:48.410 Initialization complete. Launching workers. 00:31:48.410 ======================================================== 00:31:48.410 Latency(us) 00:31:48.410 Device Information : IOPS MiB/s Average min max 00:31:48.410 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 14039.37 54.84 4558.29 1559.37 9127.58 00:31:48.410 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14294.07 55.84 4476.64 1522.33 10240.18 00:31:48.410 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13937.67 54.44 4591.95 1654.40 9989.17 00:31:48.410 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11208.38 43.78 5724.41 1394.57 48121.38 00:31:48.410 ======================================================== 00:31:48.410 Total : 53479.50 208.90 4789.64 1394.57 48121.38 00:31:48.410 00:31:48.410 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:31:48.410 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:48.410 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:31:48.410 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:48.410 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:31:48.410 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:48.410 14:42:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:48.410 rmmod nvme_tcp 00:31:48.410 rmmod nvme_fabrics 00:31:48.410 rmmod nvme_keyring 00:31:48.410 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:48.410 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:31:48.410 14:42:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:31:48.410 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 3150616 ']' 00:31:48.410 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 3150616 00:31:48.410 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3150616 ']' 00:31:48.410 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3150616 00:31:48.410 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:31:48.410 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:48.410 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3150616 00:31:48.410 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:48.410 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:48.410 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3150616' 00:31:48.410 killing process with pid 3150616 00:31:48.410 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3150616 00:31:48.410 14:42:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3150616 00:31:49.351 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:49.351 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:49.351 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:49.351 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:31:49.351 
14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:31:49.351 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:49.351 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:31:49.611 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:49.611 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:49.611 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.611 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.611 14:42:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:51.523 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:51.523 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:31:51.523 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:31:51.523 14:42:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:31:53.434 14:42:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:31:55.349 14:42:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:32:00.643 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:32:00.643 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:00.643 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:00.643 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@474 -- # prepare_net_devs 00:32:00.643 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:00.643 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:00.643 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.643 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:00.643 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.643 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:00.643 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:00.643 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:32:00.643 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:00.643 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:00.643 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:32:00.643 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:00.643 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:00.643 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:00.643 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:00.643 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:00.643 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:32:00.644 14:42:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:00.644 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.644 14:42:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:00.644 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 
0000:31:00.0: cvl_0_0' 00:32:00.644 Found net devices under 0000:31:00.0: cvl_0_0 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:00.644 Found net devices under 0000:31:00.1: cvl_0_1 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:00.644 14:42:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:00.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:00.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:32:00.644 00:32:00.644 --- 10.0.0.2 ping statistics --- 00:32:00.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.644 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:00.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:00.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:32:00.644 00:32:00.644 --- 10.0.0.1 ping statistics --- 00:32:00.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.644 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:32:00.644 net.core.busy_poll = 1 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:32:00.644 net.core.busy_read = 1 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:32:00.644 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:32:00.904 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:32:00.904 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:32:00.904 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:32:00.904 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:00.904 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:00.904 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:00.904 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:00.904 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=3156146 00:32:00.904 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 3156146 00:32:00.904 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:32:00.904 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3156146 ']' 00:32:00.904 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.904 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:00.904 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.904 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:00.904 14:42:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:00.904 [2024-10-07 14:42:24.607045] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:32:00.904 [2024-10-07 14:42:24.607160] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:01.165 [2024-10-07 14:42:24.730162] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:01.426 [2024-10-07 14:42:24.912778] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:01.426 [2024-10-07 14:42:24.912820] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:01.426 [2024-10-07 14:42:24.912831] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:01.426 [2024-10-07 14:42:24.912844] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:01.426 [2024-10-07 14:42:24.912854] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:01.426 [2024-10-07 14:42:24.915016] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:01.426 [2024-10-07 14:42:24.915087] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:32:01.426 [2024-10-07 14:42:24.915170] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.426 [2024-10-07 14:42:24.915190] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:32:01.687 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:01.687 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:32:01.687 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:01.687 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:01.687 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:01.949 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:01.949 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:32:01.949 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:32:01.949 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:32:01.949 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.949 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:01.949 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:32:01.949 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:32:01.949 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:32:01.949 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.949 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:01.949 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.949 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:32:01.949 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.949 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:02.210 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.210 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:32:02.210 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.210 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:02.210 [2024-10-07 14:42:25.727430] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:02.210 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.210 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:32:02.210 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.210 14:42:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:02.210 Malloc1 00:32:02.210 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.210 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:02.210 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.210 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:02.210 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.210 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:02.210 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.210 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:02.210 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.210 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:02.210 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.210 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:02.210 [2024-10-07 14:42:25.825762] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:02.210 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.210 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3156498 
00:32:02.210 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:32:02.210 14:42:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:04.755 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:32:04.755 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.755 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:04.755 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.755 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:32:04.755 "tick_rate": 2400000000, 00:32:04.755 "poll_groups": [ 00:32:04.755 { 00:32:04.755 "name": "nvmf_tgt_poll_group_000", 00:32:04.755 "admin_qpairs": 1, 00:32:04.755 "io_qpairs": 1, 00:32:04.755 "current_admin_qpairs": 1, 00:32:04.755 "current_io_qpairs": 1, 00:32:04.755 "pending_bdev_io": 0, 00:32:04.755 "completed_nvme_io": 25705, 00:32:04.755 "transports": [ 00:32:04.755 { 00:32:04.755 "trtype": "TCP" 00:32:04.755 } 00:32:04.755 ] 00:32:04.755 }, 00:32:04.755 { 00:32:04.755 "name": "nvmf_tgt_poll_group_001", 00:32:04.755 "admin_qpairs": 0, 00:32:04.755 "io_qpairs": 3, 00:32:04.755 "current_admin_qpairs": 0, 00:32:04.755 "current_io_qpairs": 3, 00:32:04.755 "pending_bdev_io": 0, 00:32:04.755 "completed_nvme_io": 37008, 00:32:04.755 "transports": [ 00:32:04.755 { 00:32:04.755 "trtype": "TCP" 00:32:04.755 } 00:32:04.755 ] 00:32:04.755 }, 00:32:04.755 { 00:32:04.755 "name": "nvmf_tgt_poll_group_002", 00:32:04.755 "admin_qpairs": 0, 00:32:04.755 "io_qpairs": 0, 00:32:04.755 "current_admin_qpairs": 0, 
00:32:04.755 "current_io_qpairs": 0, 00:32:04.755 "pending_bdev_io": 0, 00:32:04.755 "completed_nvme_io": 0, 00:32:04.755 "transports": [ 00:32:04.755 { 00:32:04.755 "trtype": "TCP" 00:32:04.755 } 00:32:04.755 ] 00:32:04.755 }, 00:32:04.755 { 00:32:04.755 "name": "nvmf_tgt_poll_group_003", 00:32:04.755 "admin_qpairs": 0, 00:32:04.755 "io_qpairs": 0, 00:32:04.755 "current_admin_qpairs": 0, 00:32:04.755 "current_io_qpairs": 0, 00:32:04.755 "pending_bdev_io": 0, 00:32:04.755 "completed_nvme_io": 0, 00:32:04.755 "transports": [ 00:32:04.755 { 00:32:04.755 "trtype": "TCP" 00:32:04.755 } 00:32:04.755 ] 00:32:04.755 } 00:32:04.755 ] 00:32:04.755 }' 00:32:04.755 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:32:04.755 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:32:04.755 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:32:04.755 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:32:04.755 14:42:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3156498 00:32:12.895 Initializing NVMe Controllers 00:32:12.895 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:12.895 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:32:12.895 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:32:12.895 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:32:12.895 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:32:12.895 Initialization complete. Launching workers. 
00:32:12.895 ======================================================== 00:32:12.895 Latency(us) 00:32:12.895 Device Information : IOPS MiB/s Average min max 00:32:12.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6697.90 26.16 9586.45 1322.60 55503.21 00:32:12.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6787.80 26.51 9429.88 1499.38 55322.05 00:32:12.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6460.40 25.24 9941.60 1497.30 55362.88 00:32:12.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 18259.60 71.33 3504.72 1354.58 44598.88 00:32:12.895 ======================================================== 00:32:12.895 Total : 38205.70 149.24 6712.05 1322.60 55503.21 00:32:12.895 00:32:12.895 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:32:12.895 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:12.895 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:32:12.895 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:12.895 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:32:12.895 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:12.895 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:12.895 rmmod nvme_tcp 00:32:12.895 rmmod nvme_fabrics 00:32:12.895 rmmod nvme_keyring 00:32:12.895 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:12.895 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:32:12.895 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:32:12.895 14:42:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 3156146 ']' 00:32:12.895 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 3156146 00:32:12.895 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3156146 ']' 00:32:12.895 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3156146 00:32:12.895 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:32:12.895 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:12.895 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3156146 00:32:12.895 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:12.895 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:12.895 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3156146' 00:32:12.895 killing process with pid 3156146 00:32:12.895 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3156146 00:32:12.895 14:42:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3156146 00:32:13.837 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:13.837 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:13.837 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:13.837 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:32:13.837 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:32:13.837 
14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:32:13.837 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:13.837 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:13.837 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:13.837 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:13.837 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:13.837 14:42:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.756 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:15.756 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:32:15.756 00:32:15.756 real 0m55.224s 00:32:15.756 user 2m54.364s 00:32:15.756 sys 0m12.338s 00:32:15.756 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:15.756 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:32:15.756 ************************************ 00:32:15.756 END TEST nvmf_perf_adq 00:32:15.756 ************************************ 00:32:15.756 14:42:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:32:15.756 14:42:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:15.756 14:42:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:15.756 14:42:39 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:32:15.756 ************************************ 00:32:15.756 START TEST nvmf_shutdown 00:32:15.756 ************************************ 00:32:15.756 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:32:15.756 * Looking for test storage... 00:32:16.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:32:16.017 14:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:16.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.017 --rc genhtml_branch_coverage=1 00:32:16.017 --rc genhtml_function_coverage=1 00:32:16.017 --rc genhtml_legend=1 00:32:16.017 --rc geninfo_all_blocks=1 00:32:16.017 --rc geninfo_unexecuted_blocks=1 00:32:16.017 00:32:16.017 ' 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:16.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.017 --rc genhtml_branch_coverage=1 00:32:16.017 --rc genhtml_function_coverage=1 00:32:16.017 --rc genhtml_legend=1 00:32:16.017 --rc geninfo_all_blocks=1 00:32:16.017 --rc geninfo_unexecuted_blocks=1 00:32:16.017 00:32:16.017 ' 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:16.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.017 --rc genhtml_branch_coverage=1 00:32:16.017 --rc genhtml_function_coverage=1 00:32:16.017 --rc genhtml_legend=1 00:32:16.017 --rc geninfo_all_blocks=1 00:32:16.017 --rc geninfo_unexecuted_blocks=1 00:32:16.017 00:32:16.017 ' 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:16.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:16.017 --rc genhtml_branch_coverage=1 00:32:16.017 --rc genhtml_function_coverage=1 00:32:16.017 --rc genhtml_legend=1 00:32:16.017 --rc geninfo_all_blocks=1 00:32:16.017 --rc geninfo_unexecuted_blocks=1 00:32:16.017 00:32:16.017 ' 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:16.017 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:16.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:16.018 ************************************ 00:32:16.018 START TEST nvmf_shutdown_tc1 00:32:16.018 ************************************ 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:32:16.018 14:42:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:32:24.158 14:42:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:24.158 14:42:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:24.158 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:24.158 14:42:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:24.158 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:24.158 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:24.159 Found net devices under 0000:31:00.0: cvl_0_0 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- 
# echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:24.159 Found net devices under 0000:31:00.1: cvl_0_1 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:24.159 14:42:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:24.159 14:42:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:24.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:24.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:32:24.159 00:32:24.159 --- 10.0.0.2 ping statistics --- 00:32:24.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:24.159 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:24.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:24.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:32:24.159 00:32:24.159 --- 10.0.0.1 ping statistics --- 00:32:24.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:24.159 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=3162922 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 3162922 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3162922 ']' 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:24.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:24.159 14:42:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:24.159 [2024-10-07 14:42:47.292401] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:32:24.159 [2024-10-07 14:42:47.292535] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:24.159 [2024-10-07 14:42:47.450086] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:24.159 [2024-10-07 14:42:47.680059] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:24.159 [2024-10-07 14:42:47.680131] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:24.159 [2024-10-07 14:42:47.680148] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:24.159 [2024-10-07 14:42:47.680170] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:24.159 [2024-10-07 14:42:47.680185] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
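The nvmf_tcp_init trace above (common.sh lines @250-291) moves one port of the NIC into a private network namespace so the target and initiator can exchange real TCP traffic on a single host. A minimal sketch of the same plumbing, reusing the interface and namespace names from this log (cvl_0_0 / cvl_0_1); the function only prints the commands so it can run unprivileged:

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init steps traced above. Interface and namespace
# names mirror the log: cvl_0_0 becomes the target side inside a netns,
# cvl_0_1 stays in the root namespace as the initiator side. Commands are
# printed rather than executed so the sketch runs without root; pipe the
# output through "sudo sh" to apply it for real.
emit_nvmf_tcp_init() {
  local target_if=$1 initiator_if=$2
  local ns=${target_if}_ns_spdk
  cat <<EOF
ip -4 addr flush $target_if
ip -4 addr flush $initiator_if
ip netns add $ns
ip link set $target_if netns $ns
ip addr add 10.0.0.1/24 dev $initiator_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if
ip link set $initiator_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT
EOF
}

emit_nvmf_tcp_init cvl_0_0 cvl_0_1
```

The two pings in the trace (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) then verify the link before `nvmf_tgt` is launched under `ip netns exec cvl_0_0_ns_spdk`.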
00:32:24.159 [2024-10-07 14:42:47.683188] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:32:24.159 [2024-10-07 14:42:47.683493] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:32:24.159 [2024-10-07 14:42:47.683617] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:24.159 [2024-10-07 14:42:47.683633] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:32:24.420 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:24.420 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:32:24.420 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:24.420 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:24.420 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:24.420 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:24.420 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:24.420 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.420 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:24.420 [2024-10-07 14:42:48.105123] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:24.420 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:24.420 14:42:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:32:24.420 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:32:24.420 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:24.420 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:24.420 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:24.420 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:24.420 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:32:24.680 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:24.680 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:32:24.680 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:24.680 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:32:24.680 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:24.680 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:32:24.680 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:24.680 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:32:24.680 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:24.680 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:32:24.680 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:24.680 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:32:24.680 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:24.680 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:32:24.680 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:24.680 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:32:24.680 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:24.680 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:32:24.680 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:32:24.680 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:24.680 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:24.680 Malloc1 00:32:24.680 [2024-10-07 14:42:48.251444] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:24.680 Malloc2 00:32:24.680 Malloc3 00:32:24.939 Malloc4 00:32:24.939 Malloc5 00:32:24.939 Malloc6 00:32:25.199 Malloc7 00:32:25.199 Malloc8 00:32:25.199 Malloc9 
00:32:25.461 Malloc10 00:32:25.461 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.461 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:32:25.461 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:25.461 14:42:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:25.461 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3163326 00:32:25.461 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3163326 /var/tmp/bdevperf.sock 00:32:25.461 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3163326 ']' 00:32:25.461 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:25.461 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:25.461 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:32:25.461 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:25.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:25.461 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:32:25.461 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:25.461 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:25.461 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:32:25.461 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:32:25.461 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:25.461 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:25.461 { 00:32:25.461 "params": { 00:32:25.461 "name": "Nvme$subsystem", 00:32:25.461 "trtype": "$TEST_TRANSPORT", 00:32:25.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:25.461 "adrfam": "ipv4", 00:32:25.461 "trsvcid": "$NVMF_PORT", 00:32:25.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:25.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:25.461 "hdgst": ${hdgst:-false}, 00:32:25.461 "ddgst": ${ddgst:-false} 00:32:25.461 }, 00:32:25.461 "method": "bdev_nvme_attach_controller" 00:32:25.461 } 00:32:25.461 EOF 00:32:25.461 )") 00:32:25.461 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:32:25.461 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:25.461 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:25.461 { 00:32:25.461 "params": { 00:32:25.461 "name": "Nvme$subsystem", 00:32:25.461 "trtype": "$TEST_TRANSPORT", 00:32:25.461 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:32:25.461 "adrfam": "ipv4", 00:32:25.461 "trsvcid": "$NVMF_PORT", 00:32:25.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:25.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:25.461 "hdgst": ${hdgst:-false}, 00:32:25.461 "ddgst": ${ddgst:-false} 00:32:25.461 }, 00:32:25.461 "method": "bdev_nvme_attach_controller" 00:32:25.461 } 00:32:25.461 EOF 00:32:25.461 )") 00:32:25.461 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:32:25.461 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:25.461 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:25.461 { 00:32:25.461 "params": { 00:32:25.461 "name": "Nvme$subsystem", 00:32:25.461 "trtype": "$TEST_TRANSPORT", 00:32:25.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:25.461 "adrfam": "ipv4", 00:32:25.461 "trsvcid": "$NVMF_PORT", 00:32:25.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:25.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:25.461 "hdgst": ${hdgst:-false}, 00:32:25.461 "ddgst": ${ddgst:-false} 00:32:25.461 }, 00:32:25.461 "method": "bdev_nvme_attach_controller" 00:32:25.461 } 00:32:25.461 EOF 00:32:25.461 )") 00:32:25.461 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:32:25.461 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:25.461 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:25.461 { 00:32:25.461 "params": { 00:32:25.461 "name": "Nvme$subsystem", 00:32:25.461 "trtype": "$TEST_TRANSPORT", 00:32:25.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:25.461 "adrfam": "ipv4", 00:32:25.461 "trsvcid": "$NVMF_PORT", 00:32:25.461 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:32:25.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:25.462 "hdgst": ${hdgst:-false}, 00:32:25.462 "ddgst": ${ddgst:-false} 00:32:25.462 }, 00:32:25.462 "method": "bdev_nvme_attach_controller" 00:32:25.462 } 00:32:25.462 EOF 00:32:25.462 )") 00:32:25.462 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:32:25.462 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:25.462 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:25.462 { 00:32:25.462 "params": { 00:32:25.462 "name": "Nvme$subsystem", 00:32:25.462 "trtype": "$TEST_TRANSPORT", 00:32:25.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:25.462 "adrfam": "ipv4", 00:32:25.462 "trsvcid": "$NVMF_PORT", 00:32:25.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:25.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:25.462 "hdgst": ${hdgst:-false}, 00:32:25.462 "ddgst": ${ddgst:-false} 00:32:25.462 }, 00:32:25.462 "method": "bdev_nvme_attach_controller" 00:32:25.462 } 00:32:25.462 EOF 00:32:25.462 )") 00:32:25.462 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:32:25.462 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:25.462 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:25.462 { 00:32:25.462 "params": { 00:32:25.462 "name": "Nvme$subsystem", 00:32:25.462 "trtype": "$TEST_TRANSPORT", 00:32:25.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:25.462 "adrfam": "ipv4", 00:32:25.462 "trsvcid": "$NVMF_PORT", 00:32:25.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:25.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:25.462 "hdgst": 
${hdgst:-false}, 00:32:25.462 "ddgst": ${ddgst:-false} 00:32:25.462 }, 00:32:25.462 "method": "bdev_nvme_attach_controller" 00:32:25.462 } 00:32:25.462 EOF 00:32:25.462 )") 00:32:25.462 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:32:25.462 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:25.462 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:25.462 { 00:32:25.462 "params": { 00:32:25.462 "name": "Nvme$subsystem", 00:32:25.462 "trtype": "$TEST_TRANSPORT", 00:32:25.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:25.462 "adrfam": "ipv4", 00:32:25.462 "trsvcid": "$NVMF_PORT", 00:32:25.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:25.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:25.462 "hdgst": ${hdgst:-false}, 00:32:25.462 "ddgst": ${ddgst:-false} 00:32:25.462 }, 00:32:25.462 "method": "bdev_nvme_attach_controller" 00:32:25.462 } 00:32:25.462 EOF 00:32:25.462 )") 00:32:25.462 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:32:25.462 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:25.462 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:25.462 { 00:32:25.462 "params": { 00:32:25.462 "name": "Nvme$subsystem", 00:32:25.462 "trtype": "$TEST_TRANSPORT", 00:32:25.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:25.462 "adrfam": "ipv4", 00:32:25.462 "trsvcid": "$NVMF_PORT", 00:32:25.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:25.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:25.462 "hdgst": ${hdgst:-false}, 00:32:25.462 "ddgst": ${ddgst:-false} 00:32:25.462 }, 00:32:25.462 "method": "bdev_nvme_attach_controller" 
00:32:25.462 } 00:32:25.462 EOF 00:32:25.462 )") 00:32:25.462 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:32:25.462 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:25.462 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:25.462 { 00:32:25.462 "params": { 00:32:25.462 "name": "Nvme$subsystem", 00:32:25.462 "trtype": "$TEST_TRANSPORT", 00:32:25.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:25.462 "adrfam": "ipv4", 00:32:25.462 "trsvcid": "$NVMF_PORT", 00:32:25.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:25.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:25.462 "hdgst": ${hdgst:-false}, 00:32:25.462 "ddgst": ${ddgst:-false} 00:32:25.462 }, 00:32:25.462 "method": "bdev_nvme_attach_controller" 00:32:25.462 } 00:32:25.462 EOF 00:32:25.462 )") 00:32:25.462 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:32:25.462 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:25.462 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:25.462 { 00:32:25.462 "params": { 00:32:25.462 "name": "Nvme$subsystem", 00:32:25.462 "trtype": "$TEST_TRANSPORT", 00:32:25.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:25.462 "adrfam": "ipv4", 00:32:25.462 "trsvcid": "$NVMF_PORT", 00:32:25.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:25.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:25.462 "hdgst": ${hdgst:-false}, 00:32:25.462 "ddgst": ${ddgst:-false} 00:32:25.462 }, 00:32:25.462 "method": "bdev_nvme_attach_controller" 00:32:25.462 } 00:32:25.462 EOF 00:32:25.462 )") 00:32:25.462 [2024-10-07 14:42:49.085190] Starting SPDK v25.01-pre git sha1 
3950cd1bb / DPDK 24.03.0 initialization... 00:32:25.462 [2024-10-07 14:42:49.085291] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:25.462 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:32:25.462 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:32:25.462 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:32:25.462 14:42:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:25.462 "params": { 00:32:25.462 "name": "Nvme1", 00:32:25.462 "trtype": "tcp", 00:32:25.462 "traddr": "10.0.0.2", 00:32:25.462 "adrfam": "ipv4", 00:32:25.462 "trsvcid": "4420", 00:32:25.462 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:25.462 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:25.462 "hdgst": false, 00:32:25.462 "ddgst": false 00:32:25.462 }, 00:32:25.462 "method": "bdev_nvme_attach_controller" 00:32:25.462 },{ 00:32:25.462 "params": { 00:32:25.462 "name": "Nvme2", 00:32:25.462 "trtype": "tcp", 00:32:25.462 "traddr": "10.0.0.2", 00:32:25.462 "adrfam": "ipv4", 00:32:25.462 "trsvcid": "4420", 00:32:25.462 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:25.462 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:25.462 "hdgst": false, 00:32:25.462 "ddgst": false 00:32:25.462 }, 00:32:25.462 "method": "bdev_nvme_attach_controller" 00:32:25.462 },{ 00:32:25.462 "params": { 00:32:25.462 "name": "Nvme3", 00:32:25.462 "trtype": "tcp", 00:32:25.462 "traddr": "10.0.0.2", 00:32:25.462 "adrfam": "ipv4", 00:32:25.462 "trsvcid": "4420", 00:32:25.462 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:32:25.462 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:32:25.462 "hdgst": false, 00:32:25.462 "ddgst": false 
00:32:25.462 }, 00:32:25.462 "method": "bdev_nvme_attach_controller" 00:32:25.462 },{ 00:32:25.462 "params": { 00:32:25.462 "name": "Nvme4", 00:32:25.462 "trtype": "tcp", 00:32:25.462 "traddr": "10.0.0.2", 00:32:25.462 "adrfam": "ipv4", 00:32:25.462 "trsvcid": "4420", 00:32:25.462 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:32:25.462 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:32:25.462 "hdgst": false, 00:32:25.462 "ddgst": false 00:32:25.462 }, 00:32:25.462 "method": "bdev_nvme_attach_controller" 00:32:25.462 },{ 00:32:25.462 "params": { 00:32:25.462 "name": "Nvme5", 00:32:25.462 "trtype": "tcp", 00:32:25.462 "traddr": "10.0.0.2", 00:32:25.462 "adrfam": "ipv4", 00:32:25.462 "trsvcid": "4420", 00:32:25.462 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:32:25.462 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:32:25.462 "hdgst": false, 00:32:25.462 "ddgst": false 00:32:25.462 }, 00:32:25.462 "method": "bdev_nvme_attach_controller" 00:32:25.462 },{ 00:32:25.462 "params": { 00:32:25.462 "name": "Nvme6", 00:32:25.462 "trtype": "tcp", 00:32:25.462 "traddr": "10.0.0.2", 00:32:25.462 "adrfam": "ipv4", 00:32:25.462 "trsvcid": "4420", 00:32:25.462 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:32:25.462 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:32:25.462 "hdgst": false, 00:32:25.462 "ddgst": false 00:32:25.462 }, 00:32:25.462 "method": "bdev_nvme_attach_controller" 00:32:25.462 },{ 00:32:25.462 "params": { 00:32:25.462 "name": "Nvme7", 00:32:25.462 "trtype": "tcp", 00:32:25.462 "traddr": "10.0.0.2", 00:32:25.462 "adrfam": "ipv4", 00:32:25.462 "trsvcid": "4420", 00:32:25.462 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:32:25.462 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:32:25.462 "hdgst": false, 00:32:25.462 "ddgst": false 00:32:25.462 }, 00:32:25.462 "method": "bdev_nvme_attach_controller" 00:32:25.462 },{ 00:32:25.462 "params": { 00:32:25.462 "name": "Nvme8", 00:32:25.462 "trtype": "tcp", 00:32:25.462 "traddr": "10.0.0.2", 00:32:25.462 "adrfam": "ipv4", 00:32:25.462 "trsvcid": "4420", 
00:32:25.462 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:32:25.462 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:32:25.462 "hdgst": false, 00:32:25.462 "ddgst": false 00:32:25.462 }, 00:32:25.462 "method": "bdev_nvme_attach_controller" 00:32:25.462 },{ 00:32:25.462 "params": { 00:32:25.462 "name": "Nvme9", 00:32:25.462 "trtype": "tcp", 00:32:25.462 "traddr": "10.0.0.2", 00:32:25.462 "adrfam": "ipv4", 00:32:25.463 "trsvcid": "4420", 00:32:25.463 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:32:25.463 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:32:25.463 "hdgst": false, 00:32:25.463 "ddgst": false 00:32:25.463 }, 00:32:25.463 "method": "bdev_nvme_attach_controller" 00:32:25.463 },{ 00:32:25.463 "params": { 00:32:25.463 "name": "Nvme10", 00:32:25.463 "trtype": "tcp", 00:32:25.463 "traddr": "10.0.0.2", 00:32:25.463 "adrfam": "ipv4", 00:32:25.463 "trsvcid": "4420", 00:32:25.463 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:32:25.463 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:32:25.463 "hdgst": false, 00:32:25.463 "ddgst": false 00:32:25.463 }, 00:32:25.463 "method": "bdev_nvme_attach_controller" 00:32:25.463 }' 00:32:25.723 [2024-10-07 14:42:49.204050] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.723 [2024-10-07 14:42:49.386410] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.270 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:28.270 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:32:28.270 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:28.270 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.270 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- 
# set +x 00:32:28.270 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.270 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3163326 00:32:28.270 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:32:28.270 14:42:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:32:29.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3163326 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3162922 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:29.212 { 00:32:29.212 "params": { 00:32:29.212 "name": "Nvme$subsystem", 00:32:29.212 "trtype": "$TEST_TRANSPORT", 00:32:29.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:29.212 "adrfam": "ipv4", 
00:32:29.212 "trsvcid": "$NVMF_PORT", 00:32:29.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:29.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:29.212 "hdgst": ${hdgst:-false}, 00:32:29.212 "ddgst": ${ddgst:-false} 00:32:29.212 }, 00:32:29.212 "method": "bdev_nvme_attach_controller" 00:32:29.212 } 00:32:29.212 EOF 00:32:29.212 )") 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:29.212 { 00:32:29.212 "params": { 00:32:29.212 "name": "Nvme$subsystem", 00:32:29.212 "trtype": "$TEST_TRANSPORT", 00:32:29.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:29.212 "adrfam": "ipv4", 00:32:29.212 "trsvcid": "$NVMF_PORT", 00:32:29.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:29.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:29.212 "hdgst": ${hdgst:-false}, 00:32:29.212 "ddgst": ${ddgst:-false} 00:32:29.212 }, 00:32:29.212 "method": "bdev_nvme_attach_controller" 00:32:29.212 } 00:32:29.212 EOF 00:32:29.212 )") 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:29.212 { 00:32:29.212 "params": { 00:32:29.212 "name": "Nvme$subsystem", 00:32:29.212 "trtype": "$TEST_TRANSPORT", 00:32:29.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:29.212 "adrfam": "ipv4", 00:32:29.212 "trsvcid": "$NVMF_PORT", 00:32:29.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:29.212 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:32:29.212 "hdgst": ${hdgst:-false}, 00:32:29.212 "ddgst": ${ddgst:-false} 00:32:29.212 }, 00:32:29.212 "method": "bdev_nvme_attach_controller" 00:32:29.212 } 00:32:29.212 EOF 00:32:29.212 )") 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:29.212 { 00:32:29.212 "params": { 00:32:29.212 "name": "Nvme$subsystem", 00:32:29.212 "trtype": "$TEST_TRANSPORT", 00:32:29.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:29.212 "adrfam": "ipv4", 00:32:29.212 "trsvcid": "$NVMF_PORT", 00:32:29.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:29.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:29.212 "hdgst": ${hdgst:-false}, 00:32:29.212 "ddgst": ${ddgst:-false} 00:32:29.212 }, 00:32:29.212 "method": "bdev_nvme_attach_controller" 00:32:29.212 } 00:32:29.212 EOF 00:32:29.212 )") 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:29.212 { 00:32:29.212 "params": { 00:32:29.212 "name": "Nvme$subsystem", 00:32:29.212 "trtype": "$TEST_TRANSPORT", 00:32:29.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:29.212 "adrfam": "ipv4", 00:32:29.212 "trsvcid": "$NVMF_PORT", 00:32:29.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:29.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:29.212 "hdgst": ${hdgst:-false}, 00:32:29.212 "ddgst": ${ddgst:-false} 00:32:29.212 
}, 00:32:29.212 "method": "bdev_nvme_attach_controller" 00:32:29.212 } 00:32:29.212 EOF 00:32:29.212 )") 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:29.212 { 00:32:29.212 "params": { 00:32:29.212 "name": "Nvme$subsystem", 00:32:29.212 "trtype": "$TEST_TRANSPORT", 00:32:29.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:29.212 "adrfam": "ipv4", 00:32:29.212 "trsvcid": "$NVMF_PORT", 00:32:29.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:29.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:29.212 "hdgst": ${hdgst:-false}, 00:32:29.212 "ddgst": ${ddgst:-false} 00:32:29.212 }, 00:32:29.212 "method": "bdev_nvme_attach_controller" 00:32:29.212 } 00:32:29.212 EOF 00:32:29.212 )") 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:29.212 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:29.212 { 00:32:29.212 "params": { 00:32:29.212 "name": "Nvme$subsystem", 00:32:29.212 "trtype": "$TEST_TRANSPORT", 00:32:29.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:29.212 "adrfam": "ipv4", 00:32:29.212 "trsvcid": "$NVMF_PORT", 00:32:29.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:29.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:29.213 "hdgst": ${hdgst:-false}, 00:32:29.213 "ddgst": ${ddgst:-false} 00:32:29.213 }, 00:32:29.213 "method": "bdev_nvme_attach_controller" 00:32:29.213 } 00:32:29.213 EOF 00:32:29.213 )") 00:32:29.213 14:42:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:32:29.213 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:29.213 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:29.213 { 00:32:29.213 "params": { 00:32:29.213 "name": "Nvme$subsystem", 00:32:29.213 "trtype": "$TEST_TRANSPORT", 00:32:29.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:29.213 "adrfam": "ipv4", 00:32:29.213 "trsvcid": "$NVMF_PORT", 00:32:29.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:29.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:29.213 "hdgst": ${hdgst:-false}, 00:32:29.213 "ddgst": ${ddgst:-false} 00:32:29.213 }, 00:32:29.213 "method": "bdev_nvme_attach_controller" 00:32:29.213 } 00:32:29.213 EOF 00:32:29.213 )") 00:32:29.213 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:32:29.213 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:29.213 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:29.213 { 00:32:29.213 "params": { 00:32:29.213 "name": "Nvme$subsystem", 00:32:29.213 "trtype": "$TEST_TRANSPORT", 00:32:29.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:29.213 "adrfam": "ipv4", 00:32:29.213 "trsvcid": "$NVMF_PORT", 00:32:29.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:29.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:29.213 "hdgst": ${hdgst:-false}, 00:32:29.213 "ddgst": ${ddgst:-false} 00:32:29.213 }, 00:32:29.213 "method": "bdev_nvme_attach_controller" 00:32:29.213 } 00:32:29.213 EOF 00:32:29.213 )") 00:32:29.213 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:32:29.213 14:42:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:29.213 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:29.213 { 00:32:29.213 "params": { 00:32:29.213 "name": "Nvme$subsystem", 00:32:29.213 "trtype": "$TEST_TRANSPORT", 00:32:29.213 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:29.213 "adrfam": "ipv4", 00:32:29.213 "trsvcid": "$NVMF_PORT", 00:32:29.213 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:29.213 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:29.213 "hdgst": ${hdgst:-false}, 00:32:29.213 "ddgst": ${ddgst:-false} 00:32:29.213 }, 00:32:29.213 "method": "bdev_nvme_attach_controller" 00:32:29.213 } 00:32:29.213 EOF 00:32:29.213 )") 00:32:29.213 [2024-10-07 14:42:52.643173] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:32:29.213 [2024-10-07 14:42:52.643289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3164012 ] 00:32:29.213 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:32:29.213 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
00:32:29.213 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:32:29.213 14:42:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:29.213 "params": { 00:32:29.213 "name": "Nvme1", 00:32:29.213 "trtype": "tcp", 00:32:29.213 "traddr": "10.0.0.2", 00:32:29.213 "adrfam": "ipv4", 00:32:29.213 "trsvcid": "4420", 00:32:29.213 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:29.213 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:29.213 "hdgst": false, 00:32:29.213 "ddgst": false 00:32:29.213 }, 00:32:29.213 "method": "bdev_nvme_attach_controller" 00:32:29.213 },{ 00:32:29.213 "params": { 00:32:29.213 "name": "Nvme2", 00:32:29.213 "trtype": "tcp", 00:32:29.213 "traddr": "10.0.0.2", 00:32:29.213 "adrfam": "ipv4", 00:32:29.213 "trsvcid": "4420", 00:32:29.213 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:29.213 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:29.213 "hdgst": false, 00:32:29.213 "ddgst": false 00:32:29.213 }, 00:32:29.213 "method": "bdev_nvme_attach_controller" 00:32:29.213 },{ 00:32:29.213 "params": { 00:32:29.213 "name": "Nvme3", 00:32:29.213 "trtype": "tcp", 00:32:29.213 "traddr": "10.0.0.2", 00:32:29.213 "adrfam": "ipv4", 00:32:29.213 "trsvcid": "4420", 00:32:29.213 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:32:29.213 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:32:29.213 "hdgst": false, 00:32:29.213 "ddgst": false 00:32:29.213 }, 00:32:29.213 "method": "bdev_nvme_attach_controller" 00:32:29.213 },{ 00:32:29.213 "params": { 00:32:29.213 "name": "Nvme4", 00:32:29.213 "trtype": "tcp", 00:32:29.213 "traddr": "10.0.0.2", 00:32:29.213 "adrfam": "ipv4", 00:32:29.213 "trsvcid": "4420", 00:32:29.213 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:32:29.213 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:32:29.213 "hdgst": false, 00:32:29.213 "ddgst": false 00:32:29.213 }, 00:32:29.213 "method": "bdev_nvme_attach_controller" 00:32:29.213 },{ 00:32:29.213 "params": { 
00:32:29.213 "name": "Nvme5", 00:32:29.213 "trtype": "tcp", 00:32:29.213 "traddr": "10.0.0.2", 00:32:29.213 "adrfam": "ipv4", 00:32:29.213 "trsvcid": "4420", 00:32:29.213 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:32:29.213 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:32:29.213 "hdgst": false, 00:32:29.213 "ddgst": false 00:32:29.213 }, 00:32:29.213 "method": "bdev_nvme_attach_controller" 00:32:29.213 },{ 00:32:29.213 "params": { 00:32:29.213 "name": "Nvme6", 00:32:29.213 "trtype": "tcp", 00:32:29.213 "traddr": "10.0.0.2", 00:32:29.213 "adrfam": "ipv4", 00:32:29.213 "trsvcid": "4420", 00:32:29.213 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:32:29.213 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:32:29.213 "hdgst": false, 00:32:29.213 "ddgst": false 00:32:29.213 }, 00:32:29.213 "method": "bdev_nvme_attach_controller" 00:32:29.213 },{ 00:32:29.213 "params": { 00:32:29.213 "name": "Nvme7", 00:32:29.213 "trtype": "tcp", 00:32:29.213 "traddr": "10.0.0.2", 00:32:29.213 "adrfam": "ipv4", 00:32:29.213 "trsvcid": "4420", 00:32:29.213 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:32:29.213 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:32:29.213 "hdgst": false, 00:32:29.213 "ddgst": false 00:32:29.213 }, 00:32:29.213 "method": "bdev_nvme_attach_controller" 00:32:29.213 },{ 00:32:29.213 "params": { 00:32:29.213 "name": "Nvme8", 00:32:29.213 "trtype": "tcp", 00:32:29.213 "traddr": "10.0.0.2", 00:32:29.213 "adrfam": "ipv4", 00:32:29.213 "trsvcid": "4420", 00:32:29.213 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:32:29.213 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:32:29.213 "hdgst": false, 00:32:29.213 "ddgst": false 00:32:29.213 }, 00:32:29.213 "method": "bdev_nvme_attach_controller" 00:32:29.213 },{ 00:32:29.213 "params": { 00:32:29.213 "name": "Nvme9", 00:32:29.213 "trtype": "tcp", 00:32:29.213 "traddr": "10.0.0.2", 00:32:29.213 "adrfam": "ipv4", 00:32:29.213 "trsvcid": "4420", 00:32:29.213 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:32:29.213 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:32:29.213 "hdgst": false, 00:32:29.213 "ddgst": false 00:32:29.213 }, 00:32:29.213 "method": "bdev_nvme_attach_controller" 00:32:29.213 },{ 00:32:29.213 "params": { 00:32:29.213 "name": "Nvme10", 00:32:29.213 "trtype": "tcp", 00:32:29.213 "traddr": "10.0.0.2", 00:32:29.213 "adrfam": "ipv4", 00:32:29.213 "trsvcid": "4420", 00:32:29.213 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:32:29.213 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:32:29.213 "hdgst": false, 00:32:29.213 "ddgst": false 00:32:29.213 }, 00:32:29.213 "method": "bdev_nvme_attach_controller" 00:32:29.213 }' 00:32:29.213 [2024-10-07 14:42:52.759596] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.473 [2024-10-07 14:42:52.940419] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:30.856 Running I/O for 1 seconds... 00:32:31.796 1733.00 IOPS, 108.31 MiB/s 00:32:31.796 Latency(us) 00:32:31.796 [2024-10-07T12:42:55.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:31.796 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:31.796 Verification LBA range: start 0x0 length 0x400 00:32:31.796 Nvme1n1 : 1.16 225.00 14.06 0.00 0.00 280832.26 20206.93 228939.09 00:32:31.796 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:31.796 Verification LBA range: start 0x0 length 0x400 00:32:31.796 Nvme2n1 : 1.17 219.32 13.71 0.00 0.00 283940.69 31894.19 277872.64 00:32:31.796 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:31.796 Verification LBA range: start 0x0 length 0x400 00:32:31.796 Nvme3n1 : 1.14 228.03 14.25 0.00 0.00 265794.48 9775.79 248162.99 00:32:31.796 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:31.796 Verification LBA range: start 0x0 length 0x400 00:32:31.796 Nvme4n1 : 1.14 225.54 14.10 0.00 0.00 266042.45 17148.59 277872.64 00:32:31.796 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:32:31.796 Verification LBA range: start 0x0 length 0x400 00:32:31.796 Nvme5n1 : 1.20 213.82 13.36 0.00 0.00 276682.88 16711.68 288358.40 00:32:31.796 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:31.796 Verification LBA range: start 0x0 length 0x400 00:32:31.796 Nvme6n1 : 1.15 222.35 13.90 0.00 0.00 260311.47 17913.17 270882.13 00:32:31.796 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:31.796 Verification LBA range: start 0x0 length 0x400 00:32:31.796 Nvme7n1 : 1.16 219.94 13.75 0.00 0.00 258479.15 17476.27 270882.13 00:32:31.796 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:31.796 Verification LBA range: start 0x0 length 0x400 00:32:31.796 Nvme8n1 : 1.16 225.46 14.09 0.00 0.00 245127.19 5242.88 269134.51 00:32:31.796 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:31.796 Verification LBA range: start 0x0 length 0x400 00:32:31.796 Nvme9n1 : 1.20 212.57 13.29 0.00 0.00 258818.99 17585.49 288358.40 00:32:31.796 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:31.796 Verification LBA range: start 0x0 length 0x400 00:32:31.796 Nvme10n1 : 1.21 263.75 16.48 0.00 0.00 204924.42 8574.29 291853.65 00:32:31.796 [2024-10-07T12:42:55.505Z] =================================================================================================================== 00:32:31.796 [2024-10-07T12:42:55.505Z] Total : 2255.77 140.99 0.00 0.00 258768.12 5242.88 291853.65 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:32.738 rmmod nvme_tcp 00:32:32.738 rmmod nvme_fabrics 00:32:32.738 rmmod nvme_keyring 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 3162922 ']' 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 3162922 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 3162922 ']' 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@954 -- # kill -0 3162922 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3162922 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3162922' 00:32:32.738 killing process with pid 3162922 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 3162922 00:32:32.738 14:42:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 3162922 00:32:34.649 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:34.649 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:34.649 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:34.649 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:32:34.649 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:32:34.649 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:34.649 14:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:32:34.650 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:34.650 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:34.650 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.650 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:34.650 14:42:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:36.563 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:36.563 00:32:36.563 real 0m20.407s 00:32:36.563 user 0m48.198s 00:32:36.563 sys 0m7.311s 00:32:36.563 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:36.563 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:36.563 ************************************ 00:32:36.563 END TEST nvmf_shutdown_tc1 00:32:36.563 ************************************ 00:32:36.563 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:32:36.563 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:36.563 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:36.563 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:36.563 ************************************ 00:32:36.563 
START TEST nvmf_shutdown_tc2 00:32:36.563 ************************************ 00:32:36.563 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:32:36.563 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:32:36.563 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:32:36.563 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:36.563 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:36.563 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:32:36.564 14:43:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:32:36.564 14:43:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:36.564 14:43:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:36.564 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:36.564 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:36.564 14:43:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.564 14:43:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:36.564 Found net devices under 0000:31:00.0: cvl_0_0 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:36.564 Found net devices under 0000:31:00.1: cvl_0_1 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- 
# [[ yes == yes ]] 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:36.564 14:43:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:36.564 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:36.826 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:36.826 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:36.826 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:36.826 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:36.826 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:36.826 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:36.827 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:36.827 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:36.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:36.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:32:36.827 00:32:36.827 --- 10.0.0.2 ping statistics --- 00:32:36.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.827 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:32:36.827 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:36.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:36.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:32:36.827 00:32:36.827 --- 10.0.0.1 ping statistics --- 00:32:36.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.827 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:32:36.827 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:36.827 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:32:36.827 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:36.827 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:36.827 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:36.827 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:36.827 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:36.827 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:36.827 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:36.827 14:43:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:32:36.827 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:36.827 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:36.827 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:36.827 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3165570 00:32:36.827 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3165570 00:32:36.827 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:32:36.827 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3165570 ']' 00:32:36.827 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:36.827 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:36.827 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:36.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:36.827 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:36.827 14:43:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:37.088 [2024-10-07 14:43:00.589810] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:32:37.088 [2024-10-07 14:43:00.589910] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:37.088 [2024-10-07 14:43:00.726318] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:37.349 [2024-10-07 14:43:00.864704] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:37.349 [2024-10-07 14:43:00.864751] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:37.349 [2024-10-07 14:43:00.864763] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:37.349 [2024-10-07 14:43:00.864776] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:37.349 [2024-10-07 14:43:00.864787] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:37.349 [2024-10-07 14:43:00.866583] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:32:37.349 [2024-10-07 14:43:00.866724] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:32:37.349 [2024-10-07 14:43:00.866822] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:37.349 [2024-10-07 14:43:00.866846] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:32:37.920 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:37.920 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:32:37.920 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:37.920 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:37.920 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:37.920 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:37.920 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:37.920 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.920 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:37.920 [2024-10-07 14:43:01.444767] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:37.920 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.920 14:43:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:32:37.920 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:32:37.920 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:37.920 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:37.920 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:37.920 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:37.920 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:32:37.920 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:37.920 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:32:37.920 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:37.921 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:32:37.921 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:37.921 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:32:37.921 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:37.921 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:32:37.921 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:37.921 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:32:37.921 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:37.921 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:32:37.921 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:37.921 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:32:37.921 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:37.921 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:32:37.921 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:37.921 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:32:37.921 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:32:37.921 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.921 14:43:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:37.921 Malloc1 00:32:37.921 [2024-10-07 14:43:01.574451] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:37.921 Malloc2 00:32:38.181 Malloc3 00:32:38.181 Malloc4 00:32:38.181 Malloc5 00:32:38.181 Malloc6 00:32:38.442 Malloc7 00:32:38.442 Malloc8 00:32:38.442 Malloc9 
00:32:38.704 Malloc10 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3165957 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3165957 /var/tmp/bdevperf.sock 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3165957 ']' 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:38.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:38.704 { 00:32:38.704 "params": { 00:32:38.704 "name": "Nvme$subsystem", 00:32:38.704 "trtype": "$TEST_TRANSPORT", 00:32:38.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:38.704 "adrfam": "ipv4", 00:32:38.704 "trsvcid": "$NVMF_PORT", 00:32:38.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:38.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:38.704 "hdgst": ${hdgst:-false}, 00:32:38.704 "ddgst": ${ddgst:-false} 00:32:38.704 }, 00:32:38.704 "method": "bdev_nvme_attach_controller" 00:32:38.704 } 00:32:38.704 EOF 00:32:38.704 )") 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 
00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:38.704 { 00:32:38.704 "params": { 00:32:38.704 "name": "Nvme$subsystem", 00:32:38.704 "trtype": "$TEST_TRANSPORT", 00:32:38.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:38.704 "adrfam": "ipv4", 00:32:38.704 "trsvcid": "$NVMF_PORT", 00:32:38.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:38.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:38.704 "hdgst": ${hdgst:-false}, 00:32:38.704 "ddgst": ${ddgst:-false} 00:32:38.704 }, 00:32:38.704 "method": "bdev_nvme_attach_controller" 00:32:38.704 } 00:32:38.704 EOF 00:32:38.704 )") 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:38.704 { 00:32:38.704 "params": { 00:32:38.704 "name": "Nvme$subsystem", 00:32:38.704 "trtype": "$TEST_TRANSPORT", 00:32:38.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:38.704 "adrfam": "ipv4", 00:32:38.704 "trsvcid": "$NVMF_PORT", 00:32:38.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:38.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:38.704 "hdgst": ${hdgst:-false}, 00:32:38.704 "ddgst": ${ddgst:-false} 00:32:38.704 }, 00:32:38.704 "method": "bdev_nvme_attach_controller" 00:32:38.704 } 00:32:38.704 EOF 00:32:38.704 )") 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat 
<<-EOF 00:32:38.704 { 00:32:38.704 "params": { 00:32:38.704 "name": "Nvme$subsystem", 00:32:38.704 "trtype": "$TEST_TRANSPORT", 00:32:38.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:38.704 "adrfam": "ipv4", 00:32:38.704 "trsvcid": "$NVMF_PORT", 00:32:38.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:38.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:38.704 "hdgst": ${hdgst:-false}, 00:32:38.704 "ddgst": ${ddgst:-false} 00:32:38.704 }, 00:32:38.704 "method": "bdev_nvme_attach_controller" 00:32:38.704 } 00:32:38.704 EOF 00:32:38.704 )") 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:38.704 { 00:32:38.704 "params": { 00:32:38.704 "name": "Nvme$subsystem", 00:32:38.704 "trtype": "$TEST_TRANSPORT", 00:32:38.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:38.704 "adrfam": "ipv4", 00:32:38.704 "trsvcid": "$NVMF_PORT", 00:32:38.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:38.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:38.704 "hdgst": ${hdgst:-false}, 00:32:38.704 "ddgst": ${ddgst:-false} 00:32:38.704 }, 00:32:38.704 "method": "bdev_nvme_attach_controller" 00:32:38.704 } 00:32:38.704 EOF 00:32:38.704 )") 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:38.704 { 00:32:38.704 "params": { 00:32:38.704 "name": "Nvme$subsystem", 00:32:38.704 "trtype": "$TEST_TRANSPORT", 
00:32:38.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:38.704 "adrfam": "ipv4", 00:32:38.704 "trsvcid": "$NVMF_PORT", 00:32:38.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:38.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:38.704 "hdgst": ${hdgst:-false}, 00:32:38.704 "ddgst": ${ddgst:-false} 00:32:38.704 }, 00:32:38.704 "method": "bdev_nvme_attach_controller" 00:32:38.704 } 00:32:38.704 EOF 00:32:38.704 )") 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:38.704 { 00:32:38.704 "params": { 00:32:38.704 "name": "Nvme$subsystem", 00:32:38.704 "trtype": "$TEST_TRANSPORT", 00:32:38.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:38.704 "adrfam": "ipv4", 00:32:38.704 "trsvcid": "$NVMF_PORT", 00:32:38.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:38.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:38.704 "hdgst": ${hdgst:-false}, 00:32:38.704 "ddgst": ${ddgst:-false} 00:32:38.704 }, 00:32:38.704 "method": "bdev_nvme_attach_controller" 00:32:38.704 } 00:32:38.704 EOF 00:32:38.704 )") 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:38.704 { 00:32:38.704 "params": { 00:32:38.704 "name": "Nvme$subsystem", 00:32:38.704 "trtype": "$TEST_TRANSPORT", 00:32:38.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:38.704 "adrfam": "ipv4", 00:32:38.704 "trsvcid": "$NVMF_PORT", 00:32:38.704 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:38.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:38.704 "hdgst": ${hdgst:-false}, 00:32:38.704 "ddgst": ${ddgst:-false} 00:32:38.704 }, 00:32:38.704 "method": "bdev_nvme_attach_controller" 00:32:38.704 } 00:32:38.704 EOF 00:32:38.704 )") 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:38.704 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:38.704 { 00:32:38.704 "params": { 00:32:38.704 "name": "Nvme$subsystem", 00:32:38.705 "trtype": "$TEST_TRANSPORT", 00:32:38.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:38.705 "adrfam": "ipv4", 00:32:38.705 "trsvcid": "$NVMF_PORT", 00:32:38.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:38.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:38.705 "hdgst": ${hdgst:-false}, 00:32:38.705 "ddgst": ${ddgst:-false} 00:32:38.705 }, 00:32:38.705 "method": "bdev_nvme_attach_controller" 00:32:38.705 } 00:32:38.705 EOF 00:32:38.705 )") 00:32:38.705 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:32:38.705 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:38.705 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:38.705 { 00:32:38.705 "params": { 00:32:38.705 "name": "Nvme$subsystem", 00:32:38.705 "trtype": "$TEST_TRANSPORT", 00:32:38.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:38.705 "adrfam": "ipv4", 00:32:38.705 "trsvcid": "$NVMF_PORT", 00:32:38.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:38.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:38.705 
"hdgst": ${hdgst:-false}, 00:32:38.705 "ddgst": ${ddgst:-false} 00:32:38.705 }, 00:32:38.705 "method": "bdev_nvme_attach_controller" 00:32:38.705 } 00:32:38.705 EOF 00:32:38.705 )") 00:32:38.705 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:32:38.705 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:32:38.705 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:32:38.705 14:43:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:38.705 "params": { 00:32:38.705 "name": "Nvme1", 00:32:38.705 "trtype": "tcp", 00:32:38.705 "traddr": "10.0.0.2", 00:32:38.705 "adrfam": "ipv4", 00:32:38.705 "trsvcid": "4420", 00:32:38.705 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:38.705 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:38.705 "hdgst": false, 00:32:38.705 "ddgst": false 00:32:38.705 }, 00:32:38.705 "method": "bdev_nvme_attach_controller" 00:32:38.705 },{ 00:32:38.705 "params": { 00:32:38.705 "name": "Nvme2", 00:32:38.705 "trtype": "tcp", 00:32:38.705 "traddr": "10.0.0.2", 00:32:38.705 "adrfam": "ipv4", 00:32:38.705 "trsvcid": "4420", 00:32:38.705 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:38.705 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:38.705 "hdgst": false, 00:32:38.705 "ddgst": false 00:32:38.705 }, 00:32:38.705 "method": "bdev_nvme_attach_controller" 00:32:38.705 },{ 00:32:38.705 "params": { 00:32:38.705 "name": "Nvme3", 00:32:38.705 "trtype": "tcp", 00:32:38.705 "traddr": "10.0.0.2", 00:32:38.705 "adrfam": "ipv4", 00:32:38.705 "trsvcid": "4420", 00:32:38.705 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:32:38.705 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:32:38.705 "hdgst": false, 00:32:38.705 "ddgst": false 00:32:38.705 }, 00:32:38.705 "method": "bdev_nvme_attach_controller" 00:32:38.705 },{ 00:32:38.705 "params": { 00:32:38.705 "name": "Nvme4", 
00:32:38.705 "trtype": "tcp", 00:32:38.705 "traddr": "10.0.0.2", 00:32:38.705 "adrfam": "ipv4", 00:32:38.705 "trsvcid": "4420", 00:32:38.705 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:32:38.705 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:32:38.705 "hdgst": false, 00:32:38.705 "ddgst": false 00:32:38.705 }, 00:32:38.705 "method": "bdev_nvme_attach_controller" 00:32:38.705 },{ 00:32:38.705 "params": { 00:32:38.705 "name": "Nvme5", 00:32:38.705 "trtype": "tcp", 00:32:38.705 "traddr": "10.0.0.2", 00:32:38.705 "adrfam": "ipv4", 00:32:38.705 "trsvcid": "4420", 00:32:38.705 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:32:38.705 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:32:38.705 "hdgst": false, 00:32:38.705 "ddgst": false 00:32:38.705 }, 00:32:38.705 "method": "bdev_nvme_attach_controller" 00:32:38.705 },{ 00:32:38.705 "params": { 00:32:38.705 "name": "Nvme6", 00:32:38.705 "trtype": "tcp", 00:32:38.705 "traddr": "10.0.0.2", 00:32:38.705 "adrfam": "ipv4", 00:32:38.705 "trsvcid": "4420", 00:32:38.705 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:32:38.705 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:32:38.705 "hdgst": false, 00:32:38.705 "ddgst": false 00:32:38.705 }, 00:32:38.705 "method": "bdev_nvme_attach_controller" 00:32:38.705 },{ 00:32:38.705 "params": { 00:32:38.705 "name": "Nvme7", 00:32:38.705 "trtype": "tcp", 00:32:38.705 "traddr": "10.0.0.2", 00:32:38.705 "adrfam": "ipv4", 00:32:38.705 "trsvcid": "4420", 00:32:38.705 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:32:38.705 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:32:38.705 "hdgst": false, 00:32:38.705 "ddgst": false 00:32:38.705 }, 00:32:38.705 "method": "bdev_nvme_attach_controller" 00:32:38.705 },{ 00:32:38.705 "params": { 00:32:38.705 "name": "Nvme8", 00:32:38.705 "trtype": "tcp", 00:32:38.705 "traddr": "10.0.0.2", 00:32:38.705 "adrfam": "ipv4", 00:32:38.705 "trsvcid": "4420", 00:32:38.705 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:32:38.705 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:32:38.705 "hdgst": false, 
00:32:38.705 "ddgst": false 00:32:38.705 }, 00:32:38.705 "method": "bdev_nvme_attach_controller" 00:32:38.705 },{ 00:32:38.705 "params": { 00:32:38.705 "name": "Nvme9", 00:32:38.705 "trtype": "tcp", 00:32:38.705 "traddr": "10.0.0.2", 00:32:38.705 "adrfam": "ipv4", 00:32:38.705 "trsvcid": "4420", 00:32:38.705 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:32:38.705 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:32:38.705 "hdgst": false, 00:32:38.705 "ddgst": false 00:32:38.705 }, 00:32:38.705 "method": "bdev_nvme_attach_controller" 00:32:38.705 },{ 00:32:38.705 "params": { 00:32:38.705 "name": "Nvme10", 00:32:38.705 "trtype": "tcp", 00:32:38.705 "traddr": "10.0.0.2", 00:32:38.705 "adrfam": "ipv4", 00:32:38.705 "trsvcid": "4420", 00:32:38.705 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:32:38.705 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:32:38.705 "hdgst": false, 00:32:38.705 "ddgst": false 00:32:38.705 }, 00:32:38.705 "method": "bdev_nvme_attach_controller" 00:32:38.705 }' 00:32:38.705 [2024-10-07 14:43:02.306416] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:32:38.705 [2024-10-07 14:43:02.306522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3165957 ] 00:32:38.966 [2024-10-07 14:43:02.427469] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:38.966 [2024-10-07 14:43:02.607659] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:40.880 Running I/O for 10 seconds... 
00:32:41.141 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:41.141 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:32:41.141 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:41.141 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.141 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:41.141 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.141 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:32:41.141 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:41.141 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:32:41.141 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:32:41.141 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:32:41.141 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:32:41.141 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:32:41.141 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:32:41.141 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:32:41.141 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.141 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:41.141 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.141 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=8 00:32:41.142 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 8 -ge 100 ']' 00:32:41.142 14:43:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:32:41.403 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:32:41.403 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:32:41.403 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:32:41.403 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:32:41.403 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.403 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:41.664 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.664 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:32:41.664 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:32:41.664 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:32:41.664 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:32:41.664 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:32:41.664 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3165957 00:32:41.665 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3165957 ']' 00:32:41.665 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3165957 00:32:41.665 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:32:41.665 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:41.665 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3165957 00:32:41.665 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:41.665 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:41.665 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3165957' 00:32:41.665 killing process with pid 3165957 00:32:41.665 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3165957 00:32:41.665 14:43:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3165957 00:32:41.665 
Received shutdown signal, test time was about 0.852638 seconds 00:32:41.665 00:32:41.665 Latency(us) 00:32:41.665 [2024-10-07T12:43:05.374Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:41.665 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:41.665 Verification LBA range: start 0x0 length 0x400 00:32:41.665 Nvme1n1 : 0.83 231.78 14.49 0.00 0.00 271757.37 20534.61 260396.37 00:32:41.665 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:41.665 Verification LBA range: start 0x0 length 0x400 00:32:41.665 Nvme2n1 : 0.83 230.27 14.39 0.00 0.00 267235.27 15510.19 255153.49 00:32:41.665 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:41.665 Verification LBA range: start 0x0 length 0x400 00:32:41.665 Nvme3n1 : 0.82 235.57 14.72 0.00 0.00 254409.96 18568.53 265639.25 00:32:41.665 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:41.665 Verification LBA range: start 0x0 length 0x400 00:32:41.665 Nvme4n1 : 0.82 234.20 14.64 0.00 0.00 249327.79 17913.17 262144.00 00:32:41.665 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:41.665 Verification LBA range: start 0x0 length 0x400 00:32:41.665 Nvme5n1 : 0.84 228.64 14.29 0.00 0.00 249235.34 18240.85 248162.99 00:32:41.665 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:41.665 Verification LBA range: start 0x0 length 0x400 00:32:41.665 Nvme6n1 : 0.84 227.52 14.22 0.00 0.00 243840.00 22609.92 270882.13 00:32:41.665 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:41.665 Verification LBA range: start 0x0 length 0x400 00:32:41.665 Nvme7n1 : 0.81 240.90 15.06 0.00 0.00 221596.37 2457.60 263891.63 00:32:41.665 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:41.665 Verification LBA range: start 0x0 length 0x400 00:32:41.665 Nvme8n1 : 0.83 232.09 14.51 0.00 0.00 
224017.92 27852.80 262144.00 00:32:41.665 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:41.665 Verification LBA range: start 0x0 length 0x400 00:32:41.665 Nvme9n1 : 0.80 160.71 10.04 0.00 0.00 312511.15 35607.89 265639.25 00:32:41.665 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:41.665 Verification LBA range: start 0x0 length 0x400 00:32:41.665 Nvme10n1 : 0.85 225.43 14.09 0.00 0.00 219996.73 17367.04 286610.77 00:32:41.665 [2024-10-07T12:43:05.374Z] =================================================================================================================== 00:32:41.665 [2024-10-07T12:43:05.374Z] Total : 2247.13 140.45 0.00 0.00 249240.58 2457.60 286610.77 00:32:42.608 14:43:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3165570 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 
-- # sync 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:43.550 rmmod nvme_tcp 00:32:43.550 rmmod nvme_fabrics 00:32:43.550 rmmod nvme_keyring 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 3165570 ']' 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 3165570 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3165570 ']' 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3165570 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3165570 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3165570' 00:32:43.550 killing process with pid 3165570 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3165570 00:32:43.550 14:43:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3165570 00:32:45.460 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:45.461 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:45.461 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:45.461 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:32:45.461 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:32:45.461 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:45.461 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:32:45.461 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:45.461 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:45.461 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.461 14:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:45.461 14:43:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.372 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:47.373 00:32:47.373 real 0m10.728s 00:32:47.373 user 0m34.824s 00:32:47.373 sys 0m1.596s 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:47.373 ************************************ 00:32:47.373 END TEST nvmf_shutdown_tc2 00:32:47.373 ************************************ 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:47.373 ************************************ 00:32:47.373 START TEST nvmf_shutdown_tc3 00:32:47.373 ************************************ 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:47.373 
14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:47.373 14:43:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:47.373 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:47.373 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:47.373 14:43:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:47.373 Found net devices under 0000:31:00.0: cvl_0_0 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.373 14:43:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:47.373 Found net devices under 0000:31:00.1: cvl_0_1 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:47.373 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:47.374 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:32:47.374 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:47.374 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:47.374 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:47.374 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:47.374 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:47.374 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:47.374 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:47.374 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:47.374 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:47.374 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:47.374 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:47.374 14:43:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:47.634 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:47.634 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:32:47.634 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:47.634 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:47.634 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:47.634 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:47.634 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:47.634 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:47.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:47.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.517 ms 00:32:47.634 00:32:47.634 --- 10.0.0.2 ping statistics --- 00:32:47.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.634 rtt min/avg/max/mdev = 0.517/0.517/0.517/0.000 ms 00:32:47.634 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:47.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:47.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:32:47.634 00:32:47.634 --- 10.0.0.1 ping statistics --- 00:32:47.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.634 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:32:47.634 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:47.634 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:32:47.634 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:47.634 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:47.634 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:47.635 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:47.635 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:47.635 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:47.635 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:47.635 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:32:47.635 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:47.635 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:47.635 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:47.635 
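The xtrace above shows nvmf/common.sh moving the target NIC into a network namespace, assigning the 10.0.0.x addresses, punching a firewall hole for port 4420, and verifying connectivity with ping in both directions. A condensed sketch of that sequence, assuming the interface names and addresses from the log; the `run` wrapper only prints each command so the sketch executes safely without root or the cvl NICs present:

```shell
# Dry-run sketch of the namespace setup performed by nvmf/common.sh above.
# run() prints instead of executing, so no root privileges are needed.
run() { printf '%s\n' "$*"; }

NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"            # target side enters the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator IP stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                         # root ns -> namespaced target
run ip netns exec "$NS" ping -c 1 10.0.0.1     # namespace -> root ns
```

Drop the `run` prefix (and add root privileges) to perform the setup for real; the subsequent `nvmf_tgt` launch is then wrapped in `ip netns exec "$NS"` via NVMF_TARGET_NS_CMD.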
14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=3167761 00:32:47.635 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 3167761 00:32:47.635 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:32:47.635 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3167761 ']' 00:32:47.635 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:47.635 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:47.635 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:47.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:47.635 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:47.635 14:43:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:47.894 [2024-10-07 14:43:11.419782] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:32:47.894 [2024-10-07 14:43:11.419891] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:47.894 [2024-10-07 14:43:11.531601] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:48.154 [2024-10-07 14:43:11.673040] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:48.154 [2024-10-07 14:43:11.673077] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:48.154 [2024-10-07 14:43:11.673089] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:48.154 [2024-10-07 14:43:11.673102] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:48.154 [2024-10-07 14:43:11.673112] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:48.154 [2024-10-07 14:43:11.674931] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:32:48.154 [2024-10-07 14:43:11.675102] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:32:48.154 [2024-10-07 14:43:11.675211] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:48.154 [2024-10-07 14:43:11.675234] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:32:48.726 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:48.726 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:32:48.726 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:48.726 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:48.726 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:48.726 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:48.726 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:48.726 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.726 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:48.726 [2024-10-07 14:43:12.223804] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:48.726 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.726 14:43:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:32:48.726 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:32:48.726 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:48.726 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:48.727 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:48.727 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:48.727 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:32:48.727 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:48.727 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:32:48.727 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:48.727 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:32:48.727 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:48.727 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:32:48.727 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:48.727 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:32:48.727 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:48.727 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:32:48.727 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:48.727 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:32:48.727 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:48.727 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:32:48.727 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:48.727 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:32:48.727 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:48.727 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:32:48.727 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:32:48.727 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.727 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:48.727 Malloc1 00:32:48.727 [2024-10-07 14:43:12.357300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:48.727 Malloc2 00:32:48.987 Malloc3 00:32:48.987 Malloc4 00:32:48.987 Malloc5 00:32:48.987 Malloc6 00:32:49.248 Malloc7 00:32:49.248 Malloc8 00:32:49.248 Malloc9 
00:32:49.248 Malloc10 00:32:49.509 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.509 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:32:49.509 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:49.509 14:43:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:49.509 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3168141 00:32:49.509 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3168141 /var/tmp/bdevperf.sock 00:32:49.509 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3168141 ']' 00:32:49.509 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:49.509 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:49.509 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:49.509 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:49.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
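The shutdown.sh@28/@29 loop above removes rpcs.txt and then `cat`s one block of RPC commands per subsystem (1..10) into it before replaying the batch; the Malloc1..Malloc10 lines are the resulting bdevs. The exact RPC lines written to rpcs.txt are not shown in this log, so the three commands per iteration below are illustrative stand-ins, not verbatim:

```shell
# Illustrative reconstruction of the rpcs.txt batch loop in shutdown.sh.
# The per-subsystem RPC lines are hypothetical (the log does not print them);
# they mirror the observed outcome: ten Malloc bdevs and ten cnode subsystems.
rpcs=$(mktemp)
for i in $(seq 1 10); do
  {
    echo "bdev_malloc_create -b Malloc$i 128 512"
    echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
    echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
  } >> "$rpcs"
done
wc -l < "$rpcs"    # 30 lines: three RPCs for each of the ten subsystems
```

Batching the RPCs into one file and replaying it with a single rpc.py invocation avoids paying the socket-connect cost once per command.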
00:32:49.509 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:32:49.509 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:49.509 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:49.509 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:32:49.509 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:32:49.509 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:49.509 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:49.509 { 00:32:49.509 "params": { 00:32:49.509 "name": "Nvme$subsystem", 00:32:49.509 "trtype": "$TEST_TRANSPORT", 00:32:49.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:49.509 "adrfam": "ipv4", 00:32:49.509 "trsvcid": "$NVMF_PORT", 00:32:49.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:49.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:49.509 "hdgst": ${hdgst:-false}, 00:32:49.509 "ddgst": ${ddgst:-false} 00:32:49.509 }, 00:32:49.509 "method": "bdev_nvme_attach_controller" 00:32:49.509 } 00:32:49.509 EOF 00:32:49.509 )") 00:32:49.509 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:32:49.509 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:49.509 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:49.509 { 00:32:49.509 "params": { 00:32:49.509 "name": "Nvme$subsystem", 00:32:49.509 "trtype": "$TEST_TRANSPORT", 00:32:49.509 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:32:49.510 "adrfam": "ipv4", 00:32:49.510 "trsvcid": "$NVMF_PORT", 00:32:49.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:49.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:49.510 "hdgst": ${hdgst:-false}, 00:32:49.510 "ddgst": ${ddgst:-false} 00:32:49.510 }, 00:32:49.510 "method": "bdev_nvme_attach_controller" 00:32:49.510 } 00:32:49.510 EOF 00:32:49.510 )") 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:49.510 { 00:32:49.510 "params": { 00:32:49.510 "name": "Nvme$subsystem", 00:32:49.510 "trtype": "$TEST_TRANSPORT", 00:32:49.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:49.510 "adrfam": "ipv4", 00:32:49.510 "trsvcid": "$NVMF_PORT", 00:32:49.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:49.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:49.510 "hdgst": ${hdgst:-false}, 00:32:49.510 "ddgst": ${ddgst:-false} 00:32:49.510 }, 00:32:49.510 "method": "bdev_nvme_attach_controller" 00:32:49.510 } 00:32:49.510 EOF 00:32:49.510 )") 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:49.510 { 00:32:49.510 "params": { 00:32:49.510 "name": "Nvme$subsystem", 00:32:49.510 "trtype": "$TEST_TRANSPORT", 00:32:49.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:49.510 "adrfam": "ipv4", 00:32:49.510 "trsvcid": "$NVMF_PORT", 00:32:49.510 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:32:49.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:49.510 "hdgst": ${hdgst:-false}, 00:32:49.510 "ddgst": ${ddgst:-false} 00:32:49.510 }, 00:32:49.510 "method": "bdev_nvme_attach_controller" 00:32:49.510 } 00:32:49.510 EOF 00:32:49.510 )") 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:49.510 { 00:32:49.510 "params": { 00:32:49.510 "name": "Nvme$subsystem", 00:32:49.510 "trtype": "$TEST_TRANSPORT", 00:32:49.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:49.510 "adrfam": "ipv4", 00:32:49.510 "trsvcid": "$NVMF_PORT", 00:32:49.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:49.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:49.510 "hdgst": ${hdgst:-false}, 00:32:49.510 "ddgst": ${ddgst:-false} 00:32:49.510 }, 00:32:49.510 "method": "bdev_nvme_attach_controller" 00:32:49.510 } 00:32:49.510 EOF 00:32:49.510 )") 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:49.510 { 00:32:49.510 "params": { 00:32:49.510 "name": "Nvme$subsystem", 00:32:49.510 "trtype": "$TEST_TRANSPORT", 00:32:49.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:49.510 "adrfam": "ipv4", 00:32:49.510 "trsvcid": "$NVMF_PORT", 00:32:49.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:49.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:49.510 "hdgst": 
${hdgst:-false}, 00:32:49.510 "ddgst": ${ddgst:-false} 00:32:49.510 }, 00:32:49.510 "method": "bdev_nvme_attach_controller" 00:32:49.510 } 00:32:49.510 EOF 00:32:49.510 )") 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:49.510 { 00:32:49.510 "params": { 00:32:49.510 "name": "Nvme$subsystem", 00:32:49.510 "trtype": "$TEST_TRANSPORT", 00:32:49.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:49.510 "adrfam": "ipv4", 00:32:49.510 "trsvcid": "$NVMF_PORT", 00:32:49.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:49.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:49.510 "hdgst": ${hdgst:-false}, 00:32:49.510 "ddgst": ${ddgst:-false} 00:32:49.510 }, 00:32:49.510 "method": "bdev_nvme_attach_controller" 00:32:49.510 } 00:32:49.510 EOF 00:32:49.510 )") 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:49.510 { 00:32:49.510 "params": { 00:32:49.510 "name": "Nvme$subsystem", 00:32:49.510 "trtype": "$TEST_TRANSPORT", 00:32:49.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:49.510 "adrfam": "ipv4", 00:32:49.510 "trsvcid": "$NVMF_PORT", 00:32:49.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:49.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:49.510 "hdgst": ${hdgst:-false}, 00:32:49.510 "ddgst": ${ddgst:-false} 00:32:49.510 }, 00:32:49.510 "method": "bdev_nvme_attach_controller" 
00:32:49.510 } 00:32:49.510 EOF 00:32:49.510 )") 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:49.510 { 00:32:49.510 "params": { 00:32:49.510 "name": "Nvme$subsystem", 00:32:49.510 "trtype": "$TEST_TRANSPORT", 00:32:49.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:49.510 "adrfam": "ipv4", 00:32:49.510 "trsvcid": "$NVMF_PORT", 00:32:49.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:49.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:49.510 "hdgst": ${hdgst:-false}, 00:32:49.510 "ddgst": ${ddgst:-false} 00:32:49.510 }, 00:32:49.510 "method": "bdev_nvme_attach_controller" 00:32:49.510 } 00:32:49.510 EOF 00:32:49.510 )") 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:49.510 { 00:32:49.510 "params": { 00:32:49.510 "name": "Nvme$subsystem", 00:32:49.510 "trtype": "$TEST_TRANSPORT", 00:32:49.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:49.510 "adrfam": "ipv4", 00:32:49.510 "trsvcid": "$NVMF_PORT", 00:32:49.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:49.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:49.510 "hdgst": ${hdgst:-false}, 00:32:49.510 "ddgst": ${ddgst:-false} 00:32:49.510 }, 00:32:49.510 "method": "bdev_nvme_attach_controller" 00:32:49.510 } 00:32:49.510 EOF 00:32:49.510 )") 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@580 -- # cat 00:32:49.510 [2024-10-07 14:43:13.081160] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:32:49.510 [2024-10-07 14:43:13.081266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3168141 ] 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:32:49.510 14:43:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:49.510 "params": { 00:32:49.510 "name": "Nvme1", 00:32:49.510 "trtype": "tcp", 00:32:49.510 "traddr": "10.0.0.2", 00:32:49.510 "adrfam": "ipv4", 00:32:49.510 "trsvcid": "4420", 00:32:49.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:49.510 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:49.510 "hdgst": false, 00:32:49.510 "ddgst": false 00:32:49.510 }, 00:32:49.510 "method": "bdev_nvme_attach_controller" 00:32:49.510 },{ 00:32:49.510 "params": { 00:32:49.510 "name": "Nvme2", 00:32:49.510 "trtype": "tcp", 00:32:49.510 "traddr": "10.0.0.2", 00:32:49.510 "adrfam": "ipv4", 00:32:49.510 "trsvcid": "4420", 00:32:49.510 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:49.510 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:49.510 "hdgst": false, 00:32:49.510 "ddgst": false 00:32:49.510 }, 00:32:49.510 "method": "bdev_nvme_attach_controller" 00:32:49.510 },{ 00:32:49.510 "params": { 00:32:49.510 "name": "Nvme3", 00:32:49.510 "trtype": "tcp", 00:32:49.510 "traddr": "10.0.0.2", 00:32:49.510 "adrfam": "ipv4", 00:32:49.510 "trsvcid": "4420", 00:32:49.510 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:32:49.510 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:32:49.510 "hdgst": false, 00:32:49.510 
"ddgst": false 00:32:49.510 }, 00:32:49.510 "method": "bdev_nvme_attach_controller" 00:32:49.510 },{ 00:32:49.510 "params": { 00:32:49.510 "name": "Nvme4", 00:32:49.510 "trtype": "tcp", 00:32:49.510 "traddr": "10.0.0.2", 00:32:49.510 "adrfam": "ipv4", 00:32:49.510 "trsvcid": "4420", 00:32:49.510 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:32:49.510 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:32:49.510 "hdgst": false, 00:32:49.510 "ddgst": false 00:32:49.510 }, 00:32:49.510 "method": "bdev_nvme_attach_controller" 00:32:49.510 },{ 00:32:49.510 "params": { 00:32:49.510 "name": "Nvme5", 00:32:49.510 "trtype": "tcp", 00:32:49.510 "traddr": "10.0.0.2", 00:32:49.511 "adrfam": "ipv4", 00:32:49.511 "trsvcid": "4420", 00:32:49.511 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:32:49.511 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:32:49.511 "hdgst": false, 00:32:49.511 "ddgst": false 00:32:49.511 }, 00:32:49.511 "method": "bdev_nvme_attach_controller" 00:32:49.511 },{ 00:32:49.511 "params": { 00:32:49.511 "name": "Nvme6", 00:32:49.511 "trtype": "tcp", 00:32:49.511 "traddr": "10.0.0.2", 00:32:49.511 "adrfam": "ipv4", 00:32:49.511 "trsvcid": "4420", 00:32:49.511 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:32:49.511 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:32:49.511 "hdgst": false, 00:32:49.511 "ddgst": false 00:32:49.511 }, 00:32:49.511 "method": "bdev_nvme_attach_controller" 00:32:49.511 },{ 00:32:49.511 "params": { 00:32:49.511 "name": "Nvme7", 00:32:49.511 "trtype": "tcp", 00:32:49.511 "traddr": "10.0.0.2", 00:32:49.511 "adrfam": "ipv4", 00:32:49.511 "trsvcid": "4420", 00:32:49.511 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:32:49.511 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:32:49.511 "hdgst": false, 00:32:49.511 "ddgst": false 00:32:49.511 }, 00:32:49.511 "method": "bdev_nvme_attach_controller" 00:32:49.511 },{ 00:32:49.511 "params": { 00:32:49.511 "name": "Nvme8", 00:32:49.511 "trtype": "tcp", 00:32:49.511 "traddr": "10.0.0.2", 00:32:49.511 "adrfam": "ipv4", 00:32:49.511 
"trsvcid": "4420", 00:32:49.511 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:32:49.511 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:32:49.511 "hdgst": false, 00:32:49.511 "ddgst": false 00:32:49.511 }, 00:32:49.511 "method": "bdev_nvme_attach_controller" 00:32:49.511 },{ 00:32:49.511 "params": { 00:32:49.511 "name": "Nvme9", 00:32:49.511 "trtype": "tcp", 00:32:49.511 "traddr": "10.0.0.2", 00:32:49.511 "adrfam": "ipv4", 00:32:49.511 "trsvcid": "4420", 00:32:49.511 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:32:49.511 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:32:49.511 "hdgst": false, 00:32:49.511 "ddgst": false 00:32:49.511 }, 00:32:49.511 "method": "bdev_nvme_attach_controller" 00:32:49.511 },{ 00:32:49.511 "params": { 00:32:49.511 "name": "Nvme10", 00:32:49.511 "trtype": "tcp", 00:32:49.511 "traddr": "10.0.0.2", 00:32:49.511 "adrfam": "ipv4", 00:32:49.511 "trsvcid": "4420", 00:32:49.511 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:32:49.511 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:32:49.511 "hdgst": false, 00:32:49.511 "ddgst": false 00:32:49.511 }, 00:32:49.511 "method": "bdev_nvme_attach_controller" 00:32:49.511 }' 00:32:49.511 [2024-10-07 14:43:13.198982] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.772 [2024-10-07 14:43:13.382674] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.687 Running I/O for 10 seconds... 
00:32:51.948 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:51.948 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:32:51.948 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:51.948 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.948 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:51.948 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.948 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:51.948 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:32:51.948 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:51.948 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:32:51.948 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:32:51.948 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:32:51.948 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:32:51.948 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:32:51.949 14:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:32:51.949 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:32:51.949 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.949 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:52.210 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.210 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:32:52.210 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:32:52.210 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:32:52.470 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:32:52.470 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:32:52.470 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:32:52.470 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:32:52.470 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.470 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:52.470 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:32:52.470 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:32:52.470 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:32:52.470 14:43:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:32:52.744 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:32:52.744 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:32:52.744 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:32:52.744 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:32:52.744 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.744 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:52.744 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.744 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:32:52.744 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:32:52.744 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:32:52.744 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:32:52.744 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:32:52.744 14:43:16 
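The shutdown.sh@60-@68 trace above is the waitforio loop: up to ten polls, 0.25 s apart, of `bdev_get_iostat -b Nvme1n1` piped through `jq -r '.bdevs[0].num_read_ops'`, breaking once the counter reaches 100 (here 3, then 67, then 131). A generic reconstruction with the counter command parameterized so the sketch runs without an SPDK socket; `fake_iostat` is a hypothetical stub standing in for the rpc.py/jq pipeline:

```shell
# Reconstruction of the waitforio polling loop: retry up to 10 times,
# 0.25 s apart, until the read-op count reported by "$@" reaches 100.
waitforio() {
  local i=10 count
  while [ "$i" -ne 0 ]; do
    count=$("$@")
    [ "$count" -ge 100 ] && return 0
    sleep 0.25
    i=$((i - 1))
  done
  return 1
}

# Stub for: rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
#             | jq -r '.bdevs[0].num_read_ops'
# It persists its counter in a file so each $(...) subshell sees progress.
ctr=$(mktemp); echo 0 > "$ctr"
fake_iostat() {
  local n=$(( $(cat "$ctr") + 64 ))
  echo "$n" > "$ctr"
  echo "$n"
}
waitforio fake_iostat && echo "I/O observed"
```

Once the threshold is met the real script sets `ret=0`, breaks, and proceeds to kill the nvmf_tgt process (pid 3167761 above) mid-I/O, which is the actual shutdown condition under test.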
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3167761
00:32:52.744 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3167761 ']'
00:32:52.744 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3167761
00:32:52.744 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname
00:32:52.744 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:52.744 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3167761
00:32:52.744 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:52.744 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:52.744 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3167761'
killing process with pid 3167761
00:32:52.744 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 3167761
00:32:52.744 14:43:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 3167761
00:32:52.744 [2024-10-07 14:43:16.321085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set
00:32:52.745 [2024-10-07 14:43:16.322974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set
00:32:52.746 [2024-10-07 14:43:16.324828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set
00:32:52.746 [2024-10-07 14:43:16.326949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set
[2024-10-07 14:43:16.327153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the 
state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327382] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.327395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.329103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.329123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.329131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.329138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.329144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.329151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.329164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.329171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.329178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.747 
[2024-10-07 14:43:16.329184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.329191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.329198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.329204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.329212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.329218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.329225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.329231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.329238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.329244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.329250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.329257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the 
state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.329264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.329271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.747 [2024-10-07 14:43:16.329277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329415] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 
[2024-10-07 14:43:16.329491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.329530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the 
state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333738] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.748 [2024-10-07 14:43:16.333772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 
[2024-10-07 14:43:16.333818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the 
state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.333961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:32:52.749 [2024-10-07 14:43:16.341134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 
nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:52.749 [2024-10-07 14:43:16.341344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 
14:43:16.341488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.749 [2024-10-07 14:43:16.341869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.749 [2024-10-07 14:43:16.341879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.341893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.341904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.341917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.341928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.341941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.341954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.341968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.341980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.341993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 
[2024-10-07 14:43:16.342048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:52.750 [2024-10-07 14:43:16.342480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342612] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 
nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.750 [2024-10-07 14:43:16.342779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.342833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:32:52.750 [2024-10-07 14:43:16.343061] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150003ab400 was disconnected and freed. reset controller. 00:32:52.750 [2024-10-07 14:43:16.343267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.750 [2024-10-07 14:43:16.343291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.343307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.750 [2024-10-07 14:43:16.343318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.343330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.750 [2024-10-07 14:43:16.343342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 
14:43:16.343358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.750 [2024-10-07 14:43:16.343370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.750 [2024-10-07 14:43:16.343388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a6b80 is same with the state(6) to be set 00:32:52.750 [2024-10-07 14:43:16.343440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.343454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.343467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.343478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.343490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.343501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.343513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.343524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.343534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x6150003a2080 is same with the state(6) to be set 00:32:52.751 [2024-10-07 14:43:16.343570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.343583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.343595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.343607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.343622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.343639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.343652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.343662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.343673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3e80 is same with the state(6) to be set 00:32:52.751 [2024-10-07 14:43:16.343711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.343724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 
14:43:16.343737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.343748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.343761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.343774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.343786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.343797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.343807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a4d80 is same with the state(6) to be set 00:32:52.751 [2024-10-07 14:43:16.343833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.343846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.343860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.343870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.343883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.343894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.343905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.343917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.343927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a5c80 is same with the state(6) to be set 00:32:52.751 [2024-10-07 14:43:16.343962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.343975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.343988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.343999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.344019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.344031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.344042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 
14:43:16.344053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.344063] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a2f80 is same with the state(6) to be set 00:32:52.751 [2024-10-07 14:43:16.344098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.344112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.344124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.344138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.344150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.344161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.344174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.344185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.344196] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039f100 is same with the state(6) to be set 00:32:52.751 [2024-10-07 14:43:16.344227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.344240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.344251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.344262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.344275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.344285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.344297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.344309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.344319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a1180 is same with the state(6) to be set 00:32:52.751 [2024-10-07 14:43:16.344352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.344365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.344377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 
14:43:16.344387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.344398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.344410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.344421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.344432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.344443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:32:52.751 [2024-10-07 14:43:16.344476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.344490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.344504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.344516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.344535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.344546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.344558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.751 [2024-10-07 14:43:16.344569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.344580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a0280 is same with the state(6) to be set 00:32:52.751 [2024-10-07 14:43:16.345595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.751 [2024-10-07 14:43:16.345624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.751 [2024-10-07 14:43:16.345656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.345669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.345683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.345694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.345707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.345718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:52.752 [2024-10-07 14:43:16.345733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.345743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.345756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.345767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.345780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.345792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.345806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.345816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.345829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.345841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.345856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.345869] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.345882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.345893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.345906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.345917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.345931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.345942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.345955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.345967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.345980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.345991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 
[2024-10-07 14:43:16.346297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346432] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.752 [2024-10-07 14:43:16.346661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.752 [2024-10-07 14:43:16.346675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.346685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.346698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.346709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.346722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.346733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.346746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.346756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.346770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.346782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.346794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.346806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.346819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.346829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 
[2024-10-07 14:43:16.346843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.346853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.346867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.346878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.346892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.346903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.346916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.346926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.346940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.346950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.346963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.346974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.346987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.346998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.347016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.347026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.347040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.347051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.347063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.347075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.347089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.347101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.347115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.347125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.347140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.347182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.347196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.347207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.347221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.347231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.347244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003aaf00 is same with the state(6) to be set 00:32:52.753 [2024-10-07 14:43:16.347457] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150003aaf00 was disconnected and freed. reset controller. 
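The repeated `*NOTICE*` entries above all follow one fixed layout: a timestamp, the source file and line (`nvme_qpair.c: 474:spdk_nvme_print_completion:`), a severity marker, the completion status text with its status code type / status code pair (e.g. `ABORTED - SQ DELETION (00/08)`), and the queue and command identifiers. A minimal, hypothetical Python sketch for pulling those fields out of such lines when triaging a log like this (the regex and function names are illustrative, not part of SPDK):

```python
import re

# Hypothetical parser for SPDK completion log lines of the form seen above, e.g.:
# [2024-10-07 14:43:16.344063] nvme_qpair.c: 474:spdk_nvme_print_completion:
#   *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
COMPLETION_RE = re.compile(
    r"\[(?P<ts>[\d\- :.]+)\]\s+"                         # timestamp
    r"(?P<src>\S+):\s*(?P<srcline>\d+):(?P<func>\w+):\s+"  # file, line, function
    r"\*(?P<level>\w+)\*:\s+"                            # severity (NOTICE/ERROR)
    r"(?P<status>.+?)\s+\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\)\s+"
    r"qid:(?P<qid>\d+)\s+cid:(?P<cid>\d+)"               # queue / command ids
)

def parse_completion(line):
    """Return a dict of fields for one completion log line, or None if no match."""
    m = COMPLETION_RE.search(line)
    return m.groupdict() if m else None

sample = ("[2024-10-07 14:43:16.344063] nvme_qpair.c: 474:spdk_nvme_print_completion: "
          "*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 "
          "p:0 m:0 dnr:0")
rec = parse_completion(sample)
print(rec["status"], rec["sct"], rec["sc"], rec["qid"], rec["cid"])
# -> ABORTED - SQ DELETION 00 08 1 0
```

The `(00/08)` pair is the NVMe status code type and status code; `00/08` is a generic-status "Command Aborted due to SQ Deletion" completion, which is why every outstanding ASYNC EVENT REQUEST, WRITE, and READ in the dump is reported as ABORTED while the qpair is torn down and the controller reset.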
00:32:52.753 [2024-10-07 14:43:16.349055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.349081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.349100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.349112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.349125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.349136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.349150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.349160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.349174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.349185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.349197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.349209] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.349222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.349237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.349250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.349260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.349273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.349285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.349298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.349309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.349323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.349333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.349347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.349358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.349372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.349382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.349396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.349407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.753 [2024-10-07 14:43:16.349420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.753 [2024-10-07 14:43:16.349431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.349444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.349455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.349467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.349479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.349491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.349502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.349515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.349526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.349541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.349552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.349565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.349576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.349589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.349600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.349613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.349624] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.349637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.349647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.349660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.349671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.349684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.349694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.349707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.349719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.349732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.349742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.349756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.349767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.349779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.349790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.349804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.349814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.349828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.349840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.349854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.349865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.349877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.349889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.349902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.349913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.349926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.349937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.349949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.349960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.349974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.349985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.350002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.350014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.350027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 
14:43:16.350038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.350051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.350062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.350075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.350086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.350099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.350110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.350123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.350134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.350149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.350160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.350173] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.350184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.350196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.350208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.350221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.350232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.350246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.350256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.350270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.350281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.350294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.350305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.350318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.350328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.350341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.350352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.350365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.350376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.350389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.350399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.350412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 [2024-10-07 14:43:16.350423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.350435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.754 
[2024-10-07 14:43:16.350448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.754 [2024-10-07 14:43:16.350461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.350471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.350485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.350496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.350509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.350520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.350533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.350556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.350570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.350581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.350595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.350605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.350620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.350631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.350850] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150003aa000 was disconnected and freed. reset controller. 00:32:52.755 [2024-10-07 14:43:16.350936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.350950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.350967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.350979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.350993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 
nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:52.755 [2024-10-07 14:43:16.351177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:52.755 [2024-10-07 14:43:16.351596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351732] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.755 [2024-10-07 14:43:16.351782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.755 [2024-10-07 14:43:16.351795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.756 [2024-10-07 14:43:16.351806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.756 [2024-10-07 14:43:16.351819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.756 [2024-10-07 14:43:16.351830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.756 [2024-10-07 14:43:16.351843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.756 [2024-10-07 14:43:16.351854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.756 [2024-10-07 14:43:16.351867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.756 [2024-10-07 14:43:16.351878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.756 [2024-10-07 14:43:16.351891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.756 [2024-10-07 14:43:16.351902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.756 [2024-10-07 14:43:16.351915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.756 [2024-10-07 14:43:16.351926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.756 [2024-10-07 14:43:16.351939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.756 [2024-10-07 14:43:16.351949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.756 [2024-10-07 14:43:16.351962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.756 [2024-10-07 14:43:16.351973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.756 [2024-10-07 14:43:16.351990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.756 [2024-10-07 14:43:16.352005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:52.756 [2024-10-07 14:43:16.352018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.756 [2024-10-07 14:43:16.352029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.756 [2024-10-07 14:43:16.352042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.756 [2024-10-07 14:43:16.352053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.756 [2024-10-07 14:43:16.352066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.756 [2024-10-07 14:43:16.352077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.756 [2024-10-07 14:43:16.352090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.756 [2024-10-07 14:43:16.352101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.756 [2024-10-07 14:43:16.352115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.756 [2024-10-07 14:43:16.352126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.756 [2024-10-07 14:43:16.352139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.756 [2024-10-07 
14:43:16.352151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.756 [2024-10-07 14:43:16.352164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.756 [2024-10-07 14:43:16.352175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.756 [2024-10-07 14:43:16.352188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.756 [2024-10-07 14:43:16.352200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.756 [2024-10-07 14:43:16.352213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.756 [2024-10-07 14:43:16.352224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.756 [2024-10-07 14:43:16.352238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.756 [2024-10-07 14:43:16.352249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.756 [2024-10-07 14:43:16.352263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.756 [2024-10-07 14:43:16.352273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.756 [2024-10-07 14:43:16.352287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.756 [2024-10-07 14:43:16.352299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.756 [2024-10-07 14:43:16.352312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.756 [2024-10-07 14:43:16.352323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.756 [2024-10-07 14:43:16.352337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.756 [2024-10-07 14:43:16.352347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.756 [2024-10-07 14:43:16.352360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.756 [2024-10-07 14:43:16.352375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.756 [2024-10-07 14:43:16.352388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.756 [2024-10-07 14:43:16.352399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.756 [2024-10-07 14:43:16.352411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.756 [2024-10-07 14:43:16.352422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.756 [2024-10-07 14:43:16.352441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.756 [2024-10-07 14:43:16.352452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.756 [2024-10-07 14:43:16.352465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.756 [2024-10-07 14:43:16.352476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.756 [2024-10-07 14:43:16.352489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.756 [2024-10-07 14:43:16.352500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.756 [2024-10-07 14:43:16.352513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.756 [2024-10-07 14:43:16.352524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.756 [2024-10-07 14:43:16.352712] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150003aaa00 was disconnected and freed. reset controller.
00:32:52.756 [2024-10-07 14:43:16.354048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:32:52.756 [2024-10-07 14:43:16.354092] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:32:52.756 [2024-10-07 14:43:16.354114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3e80 (9): Bad file descriptor
00:32:52.756 [2024-10-07 14:43:16.354134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a4d80 (9): Bad file descriptor
00:32:52.756 [2024-10-07 14:43:16.354166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a6b80 (9): Bad file descriptor
00:32:52.756 [2024-10-07 14:43:16.354197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a2080 (9): Bad file descriptor
00:32:52.756 [2024-10-07 14:43:16.354227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a5c80 (9): Bad file descriptor
00:32:52.756 [2024-10-07 14:43:16.354252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a2f80 (9): Bad file descriptor
00:32:52.756 [2024-10-07 14:43:16.354271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039f100 (9): Bad file descriptor
00:32:52.756 [2024-10-07 14:43:16.354300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a1180 (9): Bad file descriptor
00:32:52.756 [2024-10-07 14:43:16.354317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:32:52.756 [2024-10-07 14:43:16.354338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a0280 (9): Bad file descriptor
00:32:52.756 [2024-10-07 14:43:16.357605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:32:52.756 [2024-10-07 14:43:16.357634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:32:52.756 [2024-10-07 14:43:16.359106] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:32:52.756 [2024-10-07 14:43:16.359165] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:32:52.756 [2024-10-07 14:43:16.359545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.756 [2024-10-07 14:43:16.359573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a4d80 with addr=10.0.0.2, port=4420
00:32:52.756 [2024-10-07 14:43:16.359587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a4d80 is same with the state(6) to be set
00:32:52.756 [2024-10-07 14:43:16.359874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.756 [2024-10-07 14:43:16.359890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a3e80 with addr=10.0.0.2, port=4420
00:32:52.756 [2024-10-07 14:43:16.359901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3e80 is same with the state(6) to be set
00:32:52.756 [2024-10-07 14:43:16.360215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.756 [2024-10-07 14:43:16.360232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a1180 with addr=10.0.0.2, port=4420
00:32:52.756 [2024-10-07 14:43:16.360242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a1180 is same with the state(6) to be set
00:32:52.756 [2024-10-07 14:43:16.360568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.757 [2024-10-07 14:43:16.360582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a2f80 with addr=10.0.0.2, port=4420
00:32:52.757 [2024-10-07 14:43:16.360593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a2f80 is same with the state(6) to be set
00:32:52.757 [2024-10-07 14:43:16.360645] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:32:52.757 [2024-10-07 14:43:16.360691] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:32:52.757 [2024-10-07 14:43:16.360732] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:32:52.757 [2024-10-07 14:43:16.361231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.757 [2024-10-07 14:43:16.361253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.757 [2024-10-07 14:43:16.361272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.757 [2024-10-07 14:43:16.361283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.757 [2024-10-07 14:43:16.361299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003aa500 is same with the state(6) to be set
00:32:52.757 [2024-10-07 14:43:16.361486] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150003aa500 was disconnected and freed. reset controller.
00:32:52.757 [2024-10-07 14:43:16.362051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a4d80 (9): Bad file descriptor
00:32:52.757 [2024-10-07 14:43:16.362078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3e80 (9): Bad file descriptor
00:32:52.757 [2024-10-07 14:43:16.362091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a1180 (9): Bad file descriptor
00:32:52.757 [2024-10-07 14:43:16.362104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a2f80 (9): Bad file descriptor
00:32:52.757 [2024-10-07 14:43:16.363294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:32:52.757 [2024-10-07 14:43:16.363334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:32:52.757 [2024-10-07 14:43:16.363346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:32:52.757 [2024-10-07 14:43:16.363359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:32:52.757 [2024-10-07 14:43:16.363379] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:32:52.757 [2024-10-07 14:43:16.363389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:32:52.757 [2024-10-07 14:43:16.363400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:32:52.757 [2024-10-07 14:43:16.363416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:32:52.757 [2024-10-07 14:43:16.363425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:32:52.757 [2024-10-07 14:43:16.363435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:32:52.757 [2024-10-07 14:43:16.363450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:32:52.757 [2024-10-07 14:43:16.363460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:32:52.757 [2024-10-07 14:43:16.363470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:32:52.757 [2024-10-07 14:43:16.363556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.757 [2024-10-07 14:43:16.363571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.757 [2024-10-07 14:43:16.363581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.757 [2024-10-07 14:43:16.363591] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.757 [2024-10-07 14:43:16.363803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.757 [2024-10-07 14:43:16.363823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a2080 with addr=10.0.0.2, port=4420
00:32:52.757 [2024-10-07 14:43:16.363834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a2080 is same with the state(6) to be set
00:32:52.757 [2024-10-07 14:43:16.364304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a2080 (9): Bad file descriptor
00:32:52.757 [2024-10-07 14:43:16.364497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:32:52.757 [2024-10-07 14:43:16.364513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:32:52.757 [2024-10-07 14:43:16.364524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:32:52.757 [2024-10-07 14:43:16.364576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.757 [2024-10-07 14:43:16.364591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.757 [2024-10-07 14:43:16.364610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.757 [2024-10-07 14:43:16.364622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.757 [2024-10-07 14:43:16.364636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.757 [2024-10-07 14:43:16.364647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.757 [2024-10-07 14:43:16.364660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.757 [2024-10-07 14:43:16.364672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.757 [2024-10-07 14:43:16.364685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.757 [2024-10-07 14:43:16.364697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.757 [2024-10-07 14:43:16.364711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.757 [2024-10-07 14:43:16.364722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.757 [2024-10-07 14:43:16.364736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.757 [2024-10-07 14:43:16.364747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.757 [2024-10-07 14:43:16.364760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.757 [2024-10-07 14:43:16.364771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.757 [2024-10-07 14:43:16.364785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.757 [2024-10-07 14:43:16.364797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.757 [2024-10-07 14:43:16.364810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.757 [2024-10-07 14:43:16.364821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.757 [2024-10-07 14:43:16.364834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.757 [2024-10-07 14:43:16.364846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.757 [2024-10-07 14:43:16.364859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.757 [2024-10-07 14:43:16.364871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.757 [2024-10-07 14:43:16.364885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.757 [2024-10-07 14:43:16.364899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.757 [2024-10-07 14:43:16.364913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.757 [2024-10-07 14:43:16.364924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.757 [2024-10-07 14:43:16.364937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.757 [2024-10-07 14:43:16.364948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.757 [2024-10-07 14:43:16.364962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.757 [2024-10-07 14:43:16.364974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.757 [2024-10-07 14:43:16.364987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.757 [2024-10-07 14:43:16.364998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.757 [2024-10-07 14:43:16.365018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.757 [2024-10-07 14:43:16.365030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.757 [2024-10-07 14:43:16.365043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.757 [2024-10-07 14:43:16.365054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.757 [2024-10-07 14:43:16.365068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.757 [2024-10-07 14:43:16.365079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.757 [2024-10-07 14:43:16.365094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.757 [2024-10-07 14:43:16.365105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.757 [2024-10-07 14:43:16.365120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.365982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.365993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.366017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.366029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.366042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.366054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.366068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.366078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.366092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.366103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.366116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.366129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.366143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.758 [2024-10-07 14:43:16.366154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.758 [2024-10-07 14:43:16.366168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.759 [2024-10-07 14:43:16.366181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.759 [2024-10-07 14:43:16.366196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.759 [2024-10-07 14:43:16.366208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.759 [2024-10-07 14:43:16.366220] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a9100 is same with the state(6) to be set
00:32:52.759 [2024-10-07 14:43:16.367693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.759 [2024-10-07 14:43:16.367713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.759 [2024-10-07 14:43:16.367729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.759 [2024-10-07 14:43:16.367740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.759 [2024-10-07 14:43:16.367754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.759 [2024-10-07 14:43:16.367764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.759 [2024-10-07 14:43:16.367778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3
nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.367789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.367803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.367814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.367827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.367838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.367852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.367864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.367878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.367889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.367902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.367913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:52.759 [2024-10-07 14:43:16.367927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.367938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.367951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.367965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.367980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.367991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368072] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 
14:43:16.368502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368642] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.759 [2024-10-07 14:43:16.368703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.759 [2024-10-07 14:43:16.368717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.368728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.368742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.368753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.368766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.368778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.368790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.368801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.368815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.368826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.368839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.368850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.368864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.368875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.368888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.368899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.368913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 
[2024-10-07 14:43:16.368926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.368940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.368952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.368965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.368977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.368990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.369004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.369019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.369030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.369044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.369055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.369069] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.369080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.369092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.369104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.369118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.369128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.369141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.369152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.369165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.369177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.369189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.369201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.369214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.369225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.369240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.369252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.369265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.369277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.369290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.369302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.369313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a9600 is same with the state(6) to be set 00:32:52.760 [2024-10-07 14:43:16.370784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.370804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:52.760 [2024-10-07 14:43:16.370819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.370830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.370844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.370854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.370867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.370879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.370892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.370903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.370916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.370927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.370940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.370952] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.370966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.370977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.370991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.371007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.371024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.371036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.371049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.371060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.371074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.760 [2024-10-07 14:43:16.371085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.760 [2024-10-07 14:43:16.371099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.760 [2024-10-07 14:43:16.371111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.760 [2024-10-07 14:43:16.371124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.760 [2024-10-07 14:43:16.371135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 49 further READ commands (cid:14-62, lba:26368-32512 in steps of 128), each followed by an identical ABORTED - SQ DELETION (00/08) completion, timestamps 14:43:16.371148-16.372392 ...]
00:32:52.762 [2024-10-07 14:43:16.372405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.762 [2024-10-07 14:43:16.372416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.762 [2024-10-07 14:43:16.372429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a9b00 is same with the state(6) to be set
00:32:52.762 [2024-10-07 14:43:16.373941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.762 [2024-10-07 14:43:16.373960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 59 further READ commands (cid:1-59, lba:16512-23936 in steps of 128), each followed by an identical ABORTED - SQ DELETION (00/08) completion, timestamps 14:43:16.373977-16.375481 ...]
00:32:52.763 [2024-10-07 14:43:16.375495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.763 [2024-10-07 14:43:16.375506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.763 [2024-10-07 14:43:16.375521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.763 [2024-10-07 14:43:16.375532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.763 [2024-10-07 14:43:16.375545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.763 [2024-10-07 14:43:16.375556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.763 [2024-10-07 14:43:16.375570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.763 [2024-10-07 14:43:16.375582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.763 [2024-10-07 14:43:16.375594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003ab900 is same with the state(6) to be set 00:32:52.763 [2024-10-07 14:43:16.379582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.763 [2024-10-07 14:43:16.379609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.763 [2024-10-07 14:43:16.379625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.763 [2024-10-07 14:43:16.379636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:52.763 [2024-10-07 14:43:16.379649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.763 [2024-10-07 14:43:16.379660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.763 [2024-10-07 14:43:16.379673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.763 [2024-10-07 14:43:16.379684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.763 [2024-10-07 14:43:16.379697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.763 [2024-10-07 14:43:16.379708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.763 [2024-10-07 14:43:16.379725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.763 [2024-10-07 14:43:16.379736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.763 [2024-10-07 14:43:16.379749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.763 [2024-10-07 14:43:16.379760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.763 [2024-10-07 14:43:16.379775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.763 [2024-10-07 14:43:16.379787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.763 [2024-10-07 14:43:16.379800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.763 [2024-10-07 14:43:16.379811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.763 [2024-10-07 14:43:16.379826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.763 [2024-10-07 14:43:16.379837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.763 [2024-10-07 14:43:16.379851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.379863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.379877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.379888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.379902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.379913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.379926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.379937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.379950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.379961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.379975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.379986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:52.764 [2024-10-07 14:43:16.380081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380217] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 
14:43:16.380653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380792] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.764 [2024-10-07 14:43:16.380901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.764 [2024-10-07 14:43:16.380914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.765 [2024-10-07 14:43:16.380925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.765 [2024-10-07 14:43:16.380939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.765 [2024-10-07 14:43:16.380950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.765 [2024-10-07 14:43:16.380963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.765 [2024-10-07 14:43:16.380975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.765 [2024-10-07 14:43:16.380988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.765 [2024-10-07 14:43:16.381004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.765 [2024-10-07 14:43:16.381018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.765 [2024-10-07 14:43:16.381030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.765 [2024-10-07 14:43:16.381043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.765 [2024-10-07 14:43:16.381055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.765 [2024-10-07 14:43:16.381068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:52.765 
[2024-10-07 14:43:16.381079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.765 [2024-10-07 14:43:16.381093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.765 [2024-10-07 14:43:16.381105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.765 [2024-10-07 14:43:16.381118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.765 [2024-10-07 14:43:16.381129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.765 [2024-10-07 14:43:16.381143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.765 [2024-10-07 14:43:16.381154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.765 [2024-10-07 14:43:16.381167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.765 [2024-10-07 14:43:16.381179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.765 [2024-10-07 14:43:16.381192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:52.765 [2024-10-07 14:43:16.381203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:52.765 [2024-10-07 14:43:16.381215] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003abe00 is same with the state(6) to be set
00:32:52.765 [2024-10-07 14:43:16.385154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:52.765 [2024-10-07 14:43:16.385180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:52.765 [2024-10-07 14:43:16.385215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:32:52.765 [2024-10-07 14:43:16.385229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:32:52.765 [2024-10-07 14:43:16.385327] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:32:52.765 [2024-10-07 14:43:16.385348] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:32:52.765 [2024-10-07 14:43:16.402801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:32:52.765 task offset: 27136 on job bdev=Nvme8n1 fails
00:32:52.765
00:32:52.765                                                                                                  Latency(us)
00:32:52.765 [2024-10-07T12:43:16.474Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:52.765 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:52.765 Job: Nvme1n1 ended in about 0.99 seconds with error
00:32:52.765 Verification LBA range: start 0x0 length 0x400
00:32:52.765 	 Nvme1n1             :       0.99     128.82       8.05      64.41       0.00  327506.49   18022.40  269134.51
00:32:52.765 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:52.765 Job: Nvme2n1 ended in about 1.00 seconds with error
00:32:52.765 Verification LBA range: start 0x0 length 0x400
00:32:52.765 	 Nvme2n1             :       1.00     128.42       8.03      64.21       0.00  321835.24   45001.39  265639.25
00:32:52.765 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:52.765 Job: Nvme3n1 ended in about 1.00 seconds with error
00:32:52.765 Verification LBA range: start 0x0 length 0x400
00:32:52.765 	 Nvme3n1             :       1.00     196.03      12.25      64.01       0.00  233463.60   10431.15  269134.51
00:32:52.765 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:52.765 Job: Nvme4n1 ended in about 0.98 seconds with error
00:32:52.765 Verification LBA range: start 0x0 length 0x400
00:32:52.765 	 Nvme4n1             :       0.98     195.53      12.22      65.18       0.00  227546.99    9666.56  265639.25
00:32:52.765 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:52.765 Job: Nvme5n1 ended in about 0.99 seconds with error
00:32:52.765 Verification LBA range: start 0x0 length 0x400
00:32:52.765 	 Nvme5n1             :       0.99     192.04      12.00       2.02       0.00  298077.01   15837.87  291853.65
00:32:52.765 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:52.765 Job: Nvme6n1 ended in about 0.98 seconds with error
00:32:52.765 Verification LBA range: start 0x0 length 0x400
00:32:52.765 	 Nvme6n1             :       0.98     195.27      12.20      65.09       0.00  217831.25   10540.37  265639.25
00:32:52.765 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:52.765 Job: Nvme7n1 ended in about 0.98 seconds with error
00:32:52.765 Verification LBA range: start 0x0 length 0x400
00:32:52.765 	 Nvme7n1             :       0.98     195.89      12.24      65.30       0.00  212096.96   12451.84  265639.25
00:32:52.765 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:52.765 Job: Nvme8n1 ended in about 0.97 seconds with error
00:32:52.765 Verification LBA range: start 0x0 length 0x400
00:32:52.765 	 Nvme8n1             :       0.97     196.94      12.31      65.65       0.00  205752.37    6717.44  239424.85
00:32:52.765 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:52.765 Job: Nvme9n1 ended in about 1.00 seconds with error
00:32:52.765 Verification LBA range: start 0x0 length 0x400
00:32:52.765 	 Nvme9n1             :       1.00     127.61       7.98      63.81       0.00  277223.25   20097.71  284863.15
00:32:52.765 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:52.765 Job: Nvme10n1 ended in about 1.01 seconds with error
00:32:52.765 Verification LBA range: start 0x0 length 0x400
00:32:52.765 	 Nvme10n1            :       1.01     126.91       7.93      63.45       0.00  272567.75   19223.89  260396.37
00:32:52.765 [2024-10-07T12:43:16.474Z] ===================================================================================================================
00:32:52.765 [2024-10-07T12:43:16.474Z] Total               :    1683.45     105.22     583.12       0.00  253632.38    6717.44  291853.65
00:32:53.027 [2024-10-07 14:43:16.469968] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:32:53.027 [2024-10-07 14:43:16.470027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:32:53.027 [2024-10-07 14:43:16.470540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.027 [2024-10-07 14:43:16.470566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:32:53.027 [2024-10-07 14:43:16.470582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:32:53.027 [2024-10-07 14:43:16.470947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.027 [2024-10-07 14:43:16.470963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:32:53.027 [2024-10-07 14:43:16.470977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039f100 is same with the state(6) to be set
00:32:53.027 [2024-10-07 14:43:16.471282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.027 [2024-10-07 14:43:16.471299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0280 with addr=10.0.0.2, port=4420
[2024-10-07 14:43:16.471310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a0280 is same with the state(6) to be set
00:32:53.027 [2024-10-07 14:43:16.471344] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:32:53.027 [2024-10-07 14:43:16.471361] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:32:53.027 [2024-10-07 14:43:16.471373] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:32:53.027 [2024-10-07 14:43:16.471387] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:32:53.027 [2024-10-07 14:43:16.471401] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:32:53.027 [2024-10-07 14:43:16.471424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a0280 (9): Bad file descriptor
00:32:53.027 [2024-10-07 14:43:16.471445] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039f100 (9): Bad file descriptor
00:32:53.027 [2024-10-07 14:43:16.471463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:32:53.027 1683.45 IOPS, 105.22 MiB/s
[2024-10-07T12:43:16.736Z] [2024-10-07 14:43:16.473726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:32:53.027 [2024-10-07 14:43:16.473758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:32:53.027 [2024-10-07 14:43:16.473771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:32:53.027 [2024-10-07 14:43:16.473783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:32:53.027 [2024-10-07 14:43:16.473795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:32:53.027 [2024-10-07 14:43:16.474220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.027 [2024-10-07 14:43:16.474243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a5c80 with addr=10.0.0.2, port=4420
00:32:53.027 [2024-10-07 14:43:16.474257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a5c80 is same with the state(6) to be set
00:32:53.027 [2024-10-07 14:43:16.474589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.027 [2024-10-07 14:43:16.474605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a6b80 with addr=10.0.0.2, port=4420
00:32:53.027 [2024-10-07 14:43:16.474615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a6b80 is same with the state(6) to be set
00:32:53.027 [2024-10-07 14:43:16.474648] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:32:53.027 [2024-10-07 14:43:16.474663] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:32:53.027 [2024-10-07 14:43:16.474677] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:32:53.027 [2024-10-07 14:43:16.475523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.027 [2024-10-07 14:43:16.475551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a2f80 with addr=10.0.0.2, port=4420
00:32:53.027 [2024-10-07 14:43:16.475563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a2f80 is same with the state(6) to be set
00:32:53.027 [2024-10-07 14:43:16.475913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.027 [2024-10-07 14:43:16.475928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a1180 with addr=10.0.0.2, port=4420
00:32:53.027 [2024-10-07 14:43:16.475939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a1180 is same with the state(6) to be set
00:32:53.027 [2024-10-07 14:43:16.476273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.027 [2024-10-07 14:43:16.476289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a3e80 with addr=10.0.0.2, port=4420
00:32:53.027 [2024-10-07 14:43:16.476299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a3e80 is same with the state(6) to be set
00:32:53.027 [2024-10-07 14:43:16.476622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.027 [2024-10-07 14:43:16.476637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a4d80 with addr=10.0.0.2, port=4420
00:32:53.027 [2024-10-07 14:43:16.476648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a4d80 is same with the state(6) to be set
00:32:53.027 [2024-10-07 14:43:16.477004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.027 [2024-10-07 14:43:16.477020]
nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a2080 with addr=10.0.0.2, port=4420 00:32:53.027 [2024-10-07 14:43:16.477031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a2080 is same with the state(6) to be set 00:32:53.027 [2024-10-07 14:43:16.477045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a5c80 (9): Bad file descriptor 00:32:53.027 [2024-10-07 14:43:16.477061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a6b80 (9): Bad file descriptor 00:32:53.027 [2024-10-07 14:43:16.477073] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:53.027 [2024-10-07 14:43:16.477084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:53.027 [2024-10-07 14:43:16.477096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:53.027 [2024-10-07 14:43:16.477117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:32:53.027 [2024-10-07 14:43:16.477127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:32:53.027 [2024-10-07 14:43:16.477137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:32:53.027 [2024-10-07 14:43:16.477152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:32:53.027 [2024-10-07 14:43:16.477161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:32:53.027 [2024-10-07 14:43:16.477170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:32:53.027 [2024-10-07 14:43:16.477263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.027 [2024-10-07 14:43:16.477277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.027 [2024-10-07 14:43:16.477286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.027 [2024-10-07 14:43:16.477298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a2f80 (9): Bad file descriptor 00:32:53.027 [2024-10-07 14:43:16.477311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a1180 (9): Bad file descriptor 00:32:53.027 [2024-10-07 14:43:16.477325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3e80 (9): Bad file descriptor 00:32:53.027 [2024-10-07 14:43:16.477338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a4d80 (9): Bad file descriptor 00:32:53.027 [2024-10-07 14:43:16.477354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a2080 (9): Bad file descriptor 00:32:53.027 [2024-10-07 14:43:16.477366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:32:53.027 [2024-10-07 14:43:16.477375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:32:53.027 [2024-10-07 14:43:16.477385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:32:53.027 [2024-10-07 14:43:16.477399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:32:53.027 [2024-10-07 14:43:16.477410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:32:53.027 [2024-10-07 14:43:16.477420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:32:53.027 [2024-10-07 14:43:16.477461] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.027 [2024-10-07 14:43:16.477472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.027 [2024-10-07 14:43:16.477482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:32:53.027 [2024-10-07 14:43:16.477490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:32:53.027 [2024-10-07 14:43:16.477500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:32:53.027 [2024-10-07 14:43:16.477514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:32:53.027 [2024-10-07 14:43:16.477525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:32:53.027 [2024-10-07 14:43:16.477534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:32:53.027 [2024-10-07 14:43:16.477547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:32:53.027 [2024-10-07 14:43:16.477556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:32:53.027 [2024-10-07 14:43:16.477567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:32:53.027 [2024-10-07 14:43:16.477580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:32:53.027 [2024-10-07 14:43:16.477590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:32:53.027 [2024-10-07 14:43:16.477599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:32:53.027 [2024-10-07 14:43:16.477612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:32:53.028 [2024-10-07 14:43:16.477621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:32:53.028 [2024-10-07 14:43:16.477630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:32:53.028 [2024-10-07 14:43:16.477669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.028 [2024-10-07 14:43:16.477679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.028 [2024-10-07 14:43:16.477688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.028 [2024-10-07 14:43:16.477698] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:53.028 [2024-10-07 14:43:16.477707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.541 14:43:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3168141 00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3168141 00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3168141 00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:55.196 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:55.196 rmmod nvme_tcp 00:32:55.458 rmmod nvme_fabrics 00:32:55.458 rmmod nvme_keyring 00:32:55.458 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:55.458 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:32:55.458 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:32:55.458 14:43:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 3167761 ']' 00:32:55.458 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 3167761 00:32:55.458 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3167761 ']' 00:32:55.458 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3167761 00:32:55.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3167761) - No such process 00:32:55.458 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 3167761 is not found' 00:32:55.458 Process with pid 3167761 is not found 00:32:55.458 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:55.458 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:55.458 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:55.458 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:32:55.458 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:32:55.458 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:32:55.458 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:55.458 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:55.458 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:32:55.458 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:55.458 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:55.458 14:43:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.374 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:57.374 00:32:57.374 real 0m10.114s 00:32:57.374 user 0m28.227s 00:32:57.374 sys 0m1.606s 00:32:57.374 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:57.374 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:32:57.374 ************************************ 00:32:57.374 END TEST nvmf_shutdown_tc3 00:32:57.374 ************************************ 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:57.635 ************************************ 00:32:57.635 START TEST nvmf_shutdown_tc4 00:32:57.635 ************************************ 00:32:57.635 14:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:57.635 14:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:57.635 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:57.636 14:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:57.636 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:57.636 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:57.636 14:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:32:57.636 Found net devices under 0000:31:00.0: cvl_0_0 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:57.636 Found net devices under 0000:31:00.1: cvl_0_1 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == 
tcp ]] 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:57.636 14:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:57.636 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:57.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:57.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms
00:32:57.897
00:32:57.897 --- 10.0.0.2 ping statistics ---
00:32:57.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:57.897 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms
00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:57.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:57.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms
00:32:57.897
00:32:57.897 --- 10.0.0.1 ping statistics ---
00:32:57.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:57.897 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms
00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0
00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:32:57.897 14:43:21 
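The interface plumbing traced in nvmf/common.sh above (flush addresses, move the target-side device into a namespace, assign 10.0.0.1/24 and 10.0.0.2/24, open TCP port 4420, then ping in both directions) can be sketched as a standalone script. Device names and addresses mirror the log; the DRY_RUN preview mode is an addition here, since the real commands need root and the cvl_0_0/cvl_0_1 devices:

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init plumbing from nvmf/common.sh.
# DRY_RUN (hypothetical, default on) prints each command instead of running it.
set -euo pipefail

TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1
DRY_RUN="${DRY_RUN:-1}"

run() { if [[ "$DRY_RUN" == 1 ]]; then echo "+ $*"; else "$@"; fi; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"               # target side lives in the namespace
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TARGET_IP"                             # initiator -> target
run ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"      # target -> initiator
```

Running it with DRY_RUN=0 performs the setup; the two pings at the end are the same reachability check the log records before the target is started.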
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=3169944 00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 3169944 00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 3169944 ']' 00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:57.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:57.897 14:43:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:32:58.158 [2024-10-07 14:43:21.611379] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:32:58.158 [2024-10-07 14:43:21.611502] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:58.158 [2024-10-07 14:43:21.758379] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:32:58.418 [2024-10-07 14:43:21.904372] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:58.418 [2024-10-07 14:43:21.904413] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:58.418 [2024-10-07 14:43:21.904422] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:58.418 [2024-10-07 14:43:21.904431] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:58.418 [2024-10-07 14:43:21.904437] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
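The waitforlisten call traced above blocks until the freshly launched nvmf_tgt creates its RPC socket. A simplified sketch of that pattern (the real helper lives in common/autotest_common.sh and is more elaborate; socket path and retry budget here mirror the log):

```shell
# Hedged sketch: poll until a target process has created its UNIX-domain RPC
# socket, failing if the process dies first or the retry budget runs out.
waitforlisten() {
    local pid=$1
    local sock=${2:-/var/tmp/spdk.sock}
    local retries=${3:-100}
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died while starting
        [[ -S "$sock" ]] && return 0             # RPC socket is up
        sleep 0.1
    done
    return 1                                     # timed out
}
```

Only after this returns 0 does the test issue RPCs (nvmf_create_transport, subsystem creation) against /var/tmp/spdk.sock.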
00:32:58.418 [2024-10-07 14:43:21.906214] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:32:58.418 [2024-10-07 14:43:21.906454] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:32:58.418 [2024-10-07 14:43:21.906553] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:58.418 [2024-10-07 14:43:21.906573] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:32:58.679 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:58.679 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:32:58.679 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:58.679 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:58.679 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:58.940 [2024-10-07 14:43:22.419061] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.940 14:43:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.940 14:43:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:58.940 Malloc1 00:32:58.940 [2024-10-07 14:43:22.548695] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:58.940 Malloc2 00:32:59.201 Malloc3 00:32:59.201 Malloc4 00:32:59.201 Malloc5 00:32:59.201 Malloc6 00:32:59.461 Malloc7 00:32:59.461 Malloc8 00:32:59.461 Malloc9 
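The `for i in "${num_subsystems[@]}"` / `cat` loop above appends one RPC stanza per subsystem to rpcs.txt, which shutdown.sh@36 then replays through rpc_cmd; the Malloc1..Malloc10 bdevs in the log come from that batch. The exact stanza is not shown in the trace, so the RPC bodies below are an assumption (real SPDK RPC names, but hypothetical sizes and NQNs):

```shell
# Hedged reconstruction of the rpcs.txt batch build. The RPC arguments
# (64 MiB / 512-byte Malloc bdevs, nqn.2016-06.io.spdk:cnodeN) are
# illustrative assumptions, not taken from the trace.
NVMF_FIRST_TARGET_IP=10.0.0.2          # from the trace above
RPCS=$(mktemp)                          # stands in for test/nvmf/target/rpcs.txt
: > "$RPCS"
for i in {1..10}; do
    cat >> "$RPCS" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a $NVMF_FIRST_TARGET_IP -s 4420
EOF
done
# The batch would then be replayed against the running target in one shot,
# e.g. scripts/rpc.py < "$RPCS"
```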
00:32:59.461 Malloc10 00:32:59.461 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.461 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:32:59.461 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:59.461 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:32:59.722 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3170331 00:32:59.722 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:32:59.722 14:43:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:32:59.722 [2024-10-07 14:43:23.295991] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
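The spdk_nvme_perf invocation above, unpacked flag by flag. The comments reflect common spdk_nvme_perf option meanings; the glosses for -O and -P are assumptions, and the binary path is shortened to be relative to an SPDK checkout:

```shell
# Hedged, annotated reconstruction of the perf command launched as pid 3170331.
PERF=build/bin/spdk_nvme_perf
args=(
    -q 128                                                  # outstanding I/Os per queue
    -o 45056                                                # I/O size in bytes (44 KiB)
    -O 4096                                                 # io unit size (assumption)
    -w randwrite                                            # workload pattern
    -t 20                                                   # run time in seconds
    -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420'  # transport ID of the target
    -P 4                                                    # queue pairs per namespace (assumption)
)
echo "$PERF ${args[*]}"
```

Killing the target five seconds into this 20-second run is what produces the flood of failed writes below.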
00:33:05.014 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:05.014 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3169944 00:33:05.014 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3169944 ']' 00:33:05.014 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3169944 00:33:05.014 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:33:05.014 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:05.014 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3169944 00:33:05.014 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:05.014 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:05.014 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3169944' 00:33:05.014 killing process with pid 3169944 00:33:05.014 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 3169944 00:33:05.014 14:43:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 3169944 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 starting I/O failed: -6 
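The killprocess sequence traced above (kill -0 liveness check, uname, ps comm lookup, refusal to kill sudo, kill then wait) can be sketched as a small helper; this is a simplification of the autotest_common.sh function whose line numbers appear in the trace:

```shell
# Hedged sketch of the killprocess helper: verify the pid is alive, never
# signal the sudo wrapper itself, then terminate and reap the process.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1        # not running
    local name
    name=$(ps --no-headers -o comm= "$pid")       # process name, e.g. reactor_1
    [[ "$name" == sudo ]] && return 1             # don't kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap; ignore the signal exit code
}
```

In the log this is what tears down nvmf_tgt (pid 3169944) while spdk_nvme_perf still has I/O in flight, triggering the CQ transport errors that follow.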
00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 starting I/O failed: -6 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 starting I/O failed: -6 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 starting I/O failed: -6 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 starting I/O failed: -6 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 starting I/O failed: -6 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.014 starting I/O failed: -6 00:33:05.014 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 
starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 [2024-10-07 14:43:28.289961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 
00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 [2024-10-07 14:43:28.291488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, 
sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed 
with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 [2024-10-07 14:43:28.293442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 
00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: 
-6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.015 Write completed with error (sct=0, sc=8) 00:33:05.015 starting I/O failed: -6 00:33:05.016 Write completed with error (sct=0, sc=8) 00:33:05.016 starting I/O failed: -6 00:33:05.016 Write completed with error (sct=0, sc=8) 00:33:05.016 starting I/O failed: -6 00:33:05.016 Write completed with error (sct=0, sc=8) 00:33:05.016 starting I/O failed: -6 00:33:05.016 Write completed with error (sct=0, sc=8) 00:33:05.016 starting I/O failed: -6 00:33:05.016 Write completed with error (sct=0, sc=8) 00:33:05.016 starting I/O failed: -6 00:33:05.016 Write completed with error (sct=0, sc=8) 00:33:05.016 starting I/O failed: -6 00:33:05.016 Write completed with error (sct=0, sc=8) 00:33:05.016 starting I/O failed: -6 00:33:05.016 Write completed with error (sct=0, sc=8) 00:33:05.016 starting I/O failed: -6 00:33:05.016 Write completed with error (sct=0, sc=8) 00:33:05.016 starting I/O failed: -6 00:33:05.016 Write completed with error (sct=0, sc=8) 00:33:05.016 starting I/O failed: -6 00:33:05.016 Write completed with error (sct=0, sc=8) 00:33:05.016 starting I/O failed: -6 00:33:05.016 Write completed with error (sct=0, sc=8) 00:33:05.016 starting I/O failed: -6 00:33:05.016 Write completed with error (sct=0, sc=8) 00:33:05.016 starting I/O failed: -6 00:33:05.016 Write completed with error (sct=0, sc=8) 00:33:05.016 starting I/O failed: -6 00:33:05.016 Write completed with error (sct=0, sc=8) 00:33:05.016 starting I/O 
failed: -6
00:33:05.016 [repetitive "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided before each of the events below]
00:33:05.016 [2024-10-07 14:43:28.298487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:05.016 NVMe io qpair process completion error
00:33:05.016 [2024-10-07 14:43:28.300042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:05.016 [2024-10-07 14:43:28.301465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:05.017 [2024-10-07 14:43:28.303371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:05.017 [2024-10-07 14:43:28.313235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.017 NVMe io qpair process completion error
00:33:05.017 [2024-10-07 14:43:28.315269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:05.018 [2024-10-07 14:43:28.316668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:05.018 [2024-10-07 14:43:28.318608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:05.019 [2024-10-07 14:43:28.328125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.019 NVMe io qpair process completion error
00:33:05.019 [2024-10-07 14:43:28.329599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 [2024-10-07 14:43:28.331227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 Write completed 
with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.019 Write completed with error (sct=0, sc=8) 00:33:05.019 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 
starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 [2024-10-07 14:43:28.333136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 
00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, 
sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error 
(sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 [2024-10-07 14:43:28.346225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:05.020 NVMe io qpair process completion error 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 
00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 [2024-10-07 14:43:28.347736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting 
I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.020 starting I/O failed: -6 00:33:05.020 Write completed with error (sct=0, sc=8) 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 Write completed with error 
(sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 [2024-10-07 14:43:28.349132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: 
-6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with 
error (sct=0, sc=8) 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 [2024-10-07 14:43:28.351026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write 
completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 
Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.021 Write completed with error (sct=0, sc=8) 00:33:05.021 starting I/O failed: -6 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 starting I/O failed: -6 
00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 starting I/O failed: -6 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 starting I/O failed: -6 00:33:05.022 [2024-10-07 14:43:28.358151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:05.022 NVMe io qpair process completion error 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 starting I/O failed: -6 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 starting I/O failed: -6 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 starting I/O failed: -6 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 starting I/O failed: -6 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 starting I/O failed: -6 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 starting I/O failed: -6 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 Write completed with error (sct=0, sc=8) 00:33:05.022 Write 
completed with error (sct=0, sc=8)
00:33:05.022 Write completed with error (sct=0, sc=8)
00:33:05.022 starting I/O failed: -6
00:33:05.022 [2024-10-07 14:43:28.359672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:05.022 Write completed with error (sct=0, sc=8)
00:33:05.022 starting I/O failed: -6
00:33:05.022 [2024-10-07 14:43:28.361274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:05.022 Write completed with error (sct=0, sc=8)
00:33:05.022 starting I/O failed: -6
00:33:05.022 [2024-10-07 14:43:28.363156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:05.023 Write completed with error (sct=0, sc=8)
00:33:05.023 starting I/O failed: -6
00:33:05.023 [2024-10-07 14:43:28.370282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.023 NVMe io qpair process completion error
00:33:05.023 Write completed with error (sct=0, sc=8)
00:33:05.023 starting I/O failed: -6
00:33:05.023 [2024-10-07 14:43:28.371803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:05.023 Write completed with error (sct=0, sc=8)
00:33:05.023 starting I/O failed: -6
00:33:05.023 [2024-10-07 14:43:28.373218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:05.024 Write completed with error (sct=0, sc=8)
00:33:05.024 starting I/O failed: -6
00:33:05.024 [2024-10-07 14:43:28.375611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:05.024 Write completed with error (sct=0, sc=8)
00:33:05.024 starting I/O failed: -6
00:33:05.024 [2024-10-07 14:43:28.385071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.024 NVMe io qpair process completion error
00:33:05.024 Write completed with error (sct=0, sc=8)
00:33:05.025 starting I/O failed: -6
00:33:05.025 [2024-10-07 14:43:28.386586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:05.025 Write completed with error (sct=0, sc=8)
00:33:05.025 starting I/O failed: -6
00:33:05.025 [2024-10-07 14:43:28.387985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:05.025 Write completed with error (sct=0, sc=8)
00:33:05.025 starting I/O failed: -6
00:33:05.025 [2024-10-07 14:43:28.389903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:05.025 Write completed with error (sct=0, sc=8)
00:33:05.026 starting I/O failed: -6
00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: 
-6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 [2024-10-07 14:43:28.399492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:05.026 NVMe io qpair process completion error 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 
00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 [2024-10-07 14:43:28.400948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:05.026 starting I/O failed: -6 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 
00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write 
completed with error (sct=0, sc=8) 00:33:05.026 [2024-10-07 14:43:28.402384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 
starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.026 Write completed with error (sct=0, sc=8) 00:33:05.026 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 
Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 [2024-10-07 14:43:28.404365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 
00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, 
sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error 
(sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 [2024-10-07 14:43:28.413793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.027 NVMe io qpair process completion error 00:33:05.027 [2024-10-07 14:43:28.414528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000029e80 is same with the state(6) to be set 00:33:05.027 [2024-10-07 14:43:28.414742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000028800 is same with the state(6) to be set 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 [2024-10-07 14:43:28.414877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000027900 is same with the state(6) to be set 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 [2024-10-07 14:43:28.415027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000028080 is same with the state(6) to be set 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 [2024-10-07 14:43:28.415159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000029700 is same with the state(6) to be set 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 Write completed with error (sct=0, sc=8) 
00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 [2024-10-07 14:43:28.415285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000028f80 is same with the state(6) to be set 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 [2024-10-07 14:43:28.415431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025b00 is same with the state(6) to be set 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.027 [2024-10-07 14:43:28.415562] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026a00 is same with the state(6) to be set 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 Write completed with error (sct=0, sc=8) 00:33:05.027 starting I/O failed: -6 00:33:05.028 [2024-10-07 14:43:28.415701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000027180 is same with the state(6) to be set 00:33:05.028 [2024-10-07 14:43:28.415738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:05.028 [2024-10-07 
14:43:28.415827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026280 is same with the state(6) to be set 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 Write completed with error (sct=0, sc=8) 
00:33:05.028 starting I/O failed: -6 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 [2024-10-07 14:43:28.417358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 Write completed with error (sct=0, sc=8) 00:33:05.028 starting I/O failed: -6 00:33:05.028 
Write completed with error (sct=0, sc=8)
00:33:05.028 starting I/O failed: -6
00:33:05.028 Write completed with error (sct=0, sc=8)
00:33:05.028 starting I/O failed: -6
00:33:05.028 [2024-10-07 14:43:28.419242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:05.029 [2024-10-07 14:43:28.431700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:05.029 NVMe io qpair process completion error
00:33:05.029 Initializing NVMe Controllers
00:33:05.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:33:05.029 Controller IO queue size 128, less than required.
00:33:05.029 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:05.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:33:05.029 Controller IO queue size 128, less than required.
00:33:05.029 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:05.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:33:05.029 Controller IO queue size 128, less than required.
00:33:05.029 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:05.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:33:05.029 Controller IO queue size 128, less than required.
00:33:05.029 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:05.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:33:05.029 Controller IO queue size 128, less than required.
00:33:05.029 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:05.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:33:05.029 Controller IO queue size 128, less than required.
00:33:05.029 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:05.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:05.029 Controller IO queue size 128, less than required.
00:33:05.029 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:05.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:33:05.029 Controller IO queue size 128, less than required.
00:33:05.029 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:05.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:33:05.029 Controller IO queue size 128, less than required.
00:33:05.029 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:05.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:33:05.029 Controller IO queue size 128, less than required.
00:33:05.029 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:05.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:33:05.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:33:05.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:33:05.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:33:05.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:33:05.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:33:05.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:33:05.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:33:05.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:33:05.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:33:05.029 Initialization complete. Launching workers.
00:33:05.029 ========================================================
00:33:05.029 Latency(us)
00:33:05.029 Device Information                                     :       IOPS      MiB/s    Average        min        max
00:33:05.029 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    1689.12      72.58   75797.91    1457.53  171781.71
00:33:05.029 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    1655.15      71.12   77438.02    1208.71  180678.02
00:33:05.029 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    1649.35      70.87   77817.40     910.50  159290.58
00:33:05.029 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    1651.83      70.98   77805.47    1249.98  201046.12
00:33:05.029 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    1673.17      71.89   76938.27    1081.99  182712.09
00:33:05.029 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    1624.49      69.80   79378.66     972.39  195119.05
00:33:05.029 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1637.95      70.38   76575.41    1229.53  148261.92
00:33:05.029 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    1645.20      70.69   76290.04    1442.83  142781.46
00:33:05.029 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    1647.07      70.77   76341.55    1205.33  146124.13
00:33:05.029 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   1635.26      70.27   77014.47    1280.45  136364.09
00:33:05.029 ========================================================
00:33:05.029 Total                                                 :   16508.58     709.35   77133.85     910.50  201046.12
00:33:05.029
00:33:05.030 [2024-10-07 14:43:28.454202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000029e80 (9): Bad file descriptor
00:33:05.030 [2024-10-07 14:43:28.454242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000028800 (9): Bad file descriptor
00:33:05.030 [2024-10-07 14:43:28.454261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000027900 (9): Bad file descriptor
00:33:05.030 [2024-10-07 14:43:28.454279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000028080 (9): Bad file descriptor
00:33:05.030 [2024-10-07 14:43:28.454296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000029700 (9): Bad file descriptor
00:33:05.030 [2024-10-07 14:43:28.454312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000028f80 (9): Bad file descriptor
00:33:05.030 [2024-10-07 14:43:28.454331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000025b00 (9): Bad file descriptor
00:33:05.030 [2024-10-07 14:43:28.454348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000026a00 (9): Bad file descriptor
00:33:05.030 [2024-10-07 14:43:28.454366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000027180 (9): Bad file descriptor
00:33:05.030 [2024-10-07 14:43:28.454383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000026280 (9): Bad file descriptor
00:33:05.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:33:06.413 14:43:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3170331
00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3170331
00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4
-- common/autotest_common.sh@638 -- # local arg=wait 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3170331 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@514 -- # nvmfcleanup 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:07.356 rmmod nvme_tcp 00:33:07.356 rmmod nvme_fabrics 00:33:07.356 rmmod nvme_keyring 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 3169944 ']' 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 3169944 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3169944 ']' 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3169944 00:33:07.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3169944) - No such process 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 3169944 is not found' 00:33:07.356 Process with pid 3169944 is not found 
00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:07.356 14:43:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.899 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:09.899 00:33:09.899 real 0m11.874s 00:33:09.899 user 0m33.026s 00:33:09.899 sys 0m3.975s 00:33:09.899 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:09.899 14:43:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:33:09.899 ************************************ 00:33:09.899 END TEST nvmf_shutdown_tc4 00:33:09.899 ************************************ 00:33:09.899 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:33:09.899 00:33:09.899 real 0m53.683s 00:33:09.899 user 2m24.518s 00:33:09.899 sys 0m14.839s 00:33:09.899 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:09.899 14:43:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:09.899 ************************************ 00:33:09.899 END TEST nvmf_shutdown 00:33:09.899 ************************************ 00:33:09.899 14:43:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:33:09.899 00:33:09.899 real 19m10.718s 00:33:09.899 user 50m1.003s 00:33:09.899 sys 4m28.224s 00:33:09.899 14:43:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:09.899 14:43:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:33:09.899 ************************************ 00:33:09.899 END TEST nvmf_target_extra 00:33:09.899 ************************************ 00:33:09.899 14:43:33 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:33:09.899 14:43:33 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:09.899 14:43:33 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:09.899 14:43:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:09.899 ************************************ 00:33:09.899 START TEST nvmf_host 00:33:09.899 ************************************ 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:33:09.899 * Looking for test storage... 00:33:09.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:09.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.899 --rc genhtml_branch_coverage=1 00:33:09.899 --rc genhtml_function_coverage=1 00:33:09.899 --rc genhtml_legend=1 00:33:09.899 --rc geninfo_all_blocks=1 00:33:09.899 --rc geninfo_unexecuted_blocks=1 00:33:09.899 00:33:09.899 ' 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:09.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.899 --rc genhtml_branch_coverage=1 00:33:09.899 --rc genhtml_function_coverage=1 00:33:09.899 --rc genhtml_legend=1 00:33:09.899 --rc 
geninfo_all_blocks=1 00:33:09.899 --rc geninfo_unexecuted_blocks=1 00:33:09.899 00:33:09.899 ' 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:09.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.899 --rc genhtml_branch_coverage=1 00:33:09.899 --rc genhtml_function_coverage=1 00:33:09.899 --rc genhtml_legend=1 00:33:09.899 --rc geninfo_all_blocks=1 00:33:09.899 --rc geninfo_unexecuted_blocks=1 00:33:09.899 00:33:09.899 ' 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:09.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.899 --rc genhtml_branch_coverage=1 00:33:09.899 --rc genhtml_function_coverage=1 00:33:09.899 --rc genhtml_legend=1 00:33:09.899 --rc geninfo_all_blocks=1 00:33:09.899 --rc geninfo_unexecuted_blocks=1 00:33:09.899 00:33:09.899 ' 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.899 14:43:33 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:09.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.900 ************************************ 00:33:09.900 START TEST nvmf_multicontroller 00:33:09.900 ************************************ 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:33:09.900 * Looking for test storage... 
00:33:09.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:33:09.900 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:10.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.161 --rc genhtml_branch_coverage=1 00:33:10.161 --rc genhtml_function_coverage=1 
00:33:10.161 --rc genhtml_legend=1 00:33:10.161 --rc geninfo_all_blocks=1 00:33:10.161 --rc geninfo_unexecuted_blocks=1 00:33:10.161 00:33:10.161 ' 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:10.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.161 --rc genhtml_branch_coverage=1 00:33:10.161 --rc genhtml_function_coverage=1 00:33:10.161 --rc genhtml_legend=1 00:33:10.161 --rc geninfo_all_blocks=1 00:33:10.161 --rc geninfo_unexecuted_blocks=1 00:33:10.161 00:33:10.161 ' 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:10.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.161 --rc genhtml_branch_coverage=1 00:33:10.161 --rc genhtml_function_coverage=1 00:33:10.161 --rc genhtml_legend=1 00:33:10.161 --rc geninfo_all_blocks=1 00:33:10.161 --rc geninfo_unexecuted_blocks=1 00:33:10.161 00:33:10.161 ' 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:10.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.161 --rc genhtml_branch_coverage=1 00:33:10.161 --rc genhtml_function_coverage=1 00:33:10.161 --rc genhtml_legend=1 00:33:10.161 --rc geninfo_all_blocks=1 00:33:10.161 --rc geninfo_unexecuted_blocks=1 00:33:10.161 00:33:10.161 ' 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:10.161 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:10.162 14:43:33 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:10.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@438 -- # remove_spdk_ns 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:33:10.162 14:43:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:18.306 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:18.306 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:18.307 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:18.307 14:43:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:18.307 Found net devices under 0000:31:00.0: cvl_0_0 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in 
"${pci_devs[@]}" 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:18.307 Found net devices under 0000:31:00.1: cvl_0_1 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:18.307 14:43:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:18.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:18.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.569 ms 00:33:18.307 00:33:18.307 --- 10.0.0.2 ping statistics --- 00:33:18.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.307 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:18.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:18.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:33:18.307 00:33:18.307 --- 10.0.0.1 ping statistics --- 00:33:18.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.307 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=3176130 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 3176130 00:33:18.307 14:43:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3176130 ']' 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:18.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.307 [2024-10-07 14:43:41.192007] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:33:18.307 [2024-10-07 14:43:41.192113] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:18.307 [2024-10-07 14:43:41.337985] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:18.307 [2024-10-07 14:43:41.560692] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:18.307 [2024-10-07 14:43:41.560773] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:18.307 [2024-10-07 14:43:41.560786] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:18.307 [2024-10-07 14:43:41.560800] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:18.307 [2024-10-07 14:43:41.560810] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:18.307 [2024-10-07 14:43:41.562873] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:33:18.307 [2024-10-07 14:43:41.563008] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:18.307 [2024-10-07 14:43:41.563043] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.307 [2024-10-07 14:43:41.985299] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:18.307 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.308 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:18.308 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.308 14:43:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.570 Malloc0 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.570 [2024-10-07 
14:43:42.081250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.570 [2024-10-07 14:43:42.093157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.570 Malloc1 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3176228 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3176228 /var/tmp/bdevperf.sock 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3176228 ']' 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:18.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:18.570 14:43:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:19.514 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:19.514 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:33:19.514 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:33:19.514 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.514 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:19.514 NVMe0n1 00:33:19.514 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.514 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:19.514 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:33:19.514 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.514 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:19.514 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.514 1 00:33:19.514 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:33:19.514 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:33:19.514 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:33:19.514 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:19.514 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:19.514 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:19.514 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:19.514 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:33:19.514 14:43:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:19.515 request: 00:33:19.515 { 00:33:19.515 "name": "NVMe0", 00:33:19.515 "trtype": "tcp", 00:33:19.515 "traddr": "10.0.0.2", 00:33:19.515 "adrfam": "ipv4", 00:33:19.515 "trsvcid": "4420", 00:33:19.515 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:19.515 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:33:19.515 "hostaddr": "10.0.0.1", 00:33:19.515 "prchk_reftag": false, 00:33:19.515 "prchk_guard": false, 00:33:19.515 "hdgst": false, 00:33:19.515 "ddgst": false, 00:33:19.515 "allow_unrecognized_csi": false, 00:33:19.515 "method": "bdev_nvme_attach_controller", 00:33:19.515 "req_id": 1 00:33:19.515 } 00:33:19.515 Got JSON-RPC error response 00:33:19.515 response: 00:33:19.515 { 00:33:19.515 "code": -114, 00:33:19.515 "message": "A controller named NVMe0 already exists with the specified network path" 00:33:19.515 } 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:33:19.515 14:43:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:19.515 request: 00:33:19.515 { 00:33:19.515 "name": "NVMe0", 00:33:19.515 "trtype": "tcp", 00:33:19.515 "traddr": "10.0.0.2", 00:33:19.515 "adrfam": "ipv4", 00:33:19.515 "trsvcid": "4420", 00:33:19.515 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:19.515 "hostaddr": "10.0.0.1", 00:33:19.515 "prchk_reftag": false, 00:33:19.515 "prchk_guard": false, 00:33:19.515 "hdgst": false, 00:33:19.515 "ddgst": false, 00:33:19.515 "allow_unrecognized_csi": false, 00:33:19.515 "method": "bdev_nvme_attach_controller", 00:33:19.515 "req_id": 1 00:33:19.515 } 00:33:19.515 Got JSON-RPC error response 00:33:19.515 response: 00:33:19.515 { 00:33:19.515 "code": -114, 00:33:19.515 "message": "A controller named NVMe0 already exists with the specified network path" 00:33:19.515 } 00:33:19.515 14:43:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:19.515 request: 00:33:19.515 { 00:33:19.515 "name": "NVMe0", 00:33:19.515 "trtype": "tcp", 00:33:19.515 "traddr": "10.0.0.2", 00:33:19.515 "adrfam": "ipv4", 00:33:19.515 "trsvcid": "4420", 00:33:19.515 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:19.515 "hostaddr": "10.0.0.1", 00:33:19.515 "prchk_reftag": false, 00:33:19.515 "prchk_guard": false, 00:33:19.515 "hdgst": false, 00:33:19.515 "ddgst": false, 00:33:19.515 "multipath": "disable", 00:33:19.515 "allow_unrecognized_csi": false, 00:33:19.515 "method": "bdev_nvme_attach_controller", 00:33:19.515 "req_id": 1 00:33:19.515 } 00:33:19.515 Got JSON-RPC error response 00:33:19.515 response: 00:33:19.515 { 00:33:19.515 "code": -114, 00:33:19.515 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:33:19.515 } 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.515 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:19.777 request: 00:33:19.777 { 00:33:19.777 "name": "NVMe0", 00:33:19.777 "trtype": "tcp", 00:33:19.777 "traddr": "10.0.0.2", 00:33:19.777 "adrfam": "ipv4", 00:33:19.777 "trsvcid": "4420", 00:33:19.777 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:19.777 "hostaddr": "10.0.0.1", 00:33:19.777 "prchk_reftag": false, 00:33:19.777 "prchk_guard": false, 00:33:19.777 "hdgst": false, 00:33:19.777 "ddgst": false, 00:33:19.777 "multipath": "failover", 00:33:19.777 "allow_unrecognized_csi": false, 00:33:19.777 "method": "bdev_nvme_attach_controller", 00:33:19.777 "req_id": 1 00:33:19.777 } 00:33:19.777 Got JSON-RPC error response 00:33:19.777 response: 00:33:19.777 { 00:33:19.777 "code": -114, 00:33:19.777 "message": "A controller named NVMe0 already exists with the specified network path" 00:33:19.777 } 00:33:19.777 14:43:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:19.777 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:33:19.777 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:19.777 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:19.777 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:19.777 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:19.777 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.777 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:19.777 00:33:19.777 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.777 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:19.777 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.777 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:19.777 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.777 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:33:19.777 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.777 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:19.777 00:33:19.777 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.777 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:19.777 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:33:19.777 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.777 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:20.038 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.038 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:33:20.038 14:43:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:20.982 { 00:33:20.982 "results": [ 00:33:20.982 { 00:33:20.982 "job": "NVMe0n1", 00:33:20.982 "core_mask": "0x1", 00:33:20.982 "workload": "write", 00:33:20.982 "status": "finished", 00:33:20.982 "queue_depth": 128, 00:33:20.982 "io_size": 4096, 00:33:20.982 "runtime": 1.004303, 00:33:20.982 "iops": 18426.709867440404, 00:33:20.982 "mibps": 71.97933541968908, 00:33:20.982 "io_failed": 0, 00:33:20.982 "io_timeout": 0, 00:33:20.982 "avg_latency_us": 6933.809108397277, 00:33:20.982 "min_latency_us": 2757.9733333333334, 00:33:20.982 "max_latency_us": 16820.906666666666 00:33:20.982 } 00:33:20.982 ], 00:33:20.982 "core_count": 1 00:33:20.982 } 00:33:20.982 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:33:20.982 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.982 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:20.982 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.982 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:33:20.982 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3176228 00:33:20.982 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3176228 ']' 00:33:20.982 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3176228 00:33:20.982 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:33:20.982 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:20.982 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3176228 00:33:21.243 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:21.243 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:21.243 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3176228' 00:33:21.243 killing process with pid 3176228 00:33:21.243 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3176228 00:33:21.243 14:43:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3176228 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:33:21.815 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:33:21.815 [2024-10-07 14:43:42.282301] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:33:21.815 [2024-10-07 14:43:42.282413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3176228 ] 00:33:21.815 [2024-10-07 14:43:42.399547] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.815 [2024-10-07 14:43:42.579786] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:21.815 [2024-10-07 14:43:43.470657] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name a020d375-0dfe-494b-a536-636ff099b19f already exists 00:33:21.815 [2024-10-07 14:43:43.470699] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:a020d375-0dfe-494b-a536-636ff099b19f alias for bdev NVMe1n1 00:33:21.815 [2024-10-07 14:43:43.470714] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:33:21.815 Running I/O for 1 seconds... 00:33:21.815 18378.00 IOPS, 71.79 MiB/s 00:33:21.815 Latency(us) 00:33:21.815 [2024-10-07T12:43:45.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:21.815 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:33:21.815 NVMe0n1 : 1.00 18426.71 71.98 0.00 0.00 6933.81 2757.97 16820.91 00:33:21.815 [2024-10-07T12:43:45.524Z] =================================================================================================================== 00:33:21.815 [2024-10-07T12:43:45.524Z] Total : 18426.71 71.98 0.00 0.00 6933.81 2757.97 16820.91 00:33:21.815 Received shutdown signal, test time was about 1.000000 seconds 00:33:21.815 00:33:21.815 Latency(us) 00:33:21.815 [2024-10-07T12:43:45.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:21.815 [2024-10-07T12:43:45.524Z] =================================================================================================================== 00:33:21.815 [2024-10-07T12:43:45.524Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:33:21.815 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:21.815 rmmod nvme_tcp 00:33:21.815 rmmod nvme_fabrics 00:33:21.815 rmmod nvme_keyring 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 3176130 ']' 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 3176130 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3176130 ']' 00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3176130 
00:33:21.815 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:33:22.077 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:22.077 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3176130 00:33:22.077 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:22.077 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:22.077 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3176130' 00:33:22.077 killing process with pid 3176130 00:33:22.077 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3176130 00:33:22.077 14:43:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3176130 00:33:23.019 14:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:23.019 14:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:23.019 14:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:23.019 14:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:33:23.019 14:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:23.019 14:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:33:23.019 14:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:33:23.019 14:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:23.019 14:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:33:23.019 14:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.019 14:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:23.019 14:43:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:24.933 14:43:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:24.933 00:33:24.933 real 0m15.038s 00:33:24.933 user 0m19.738s 00:33:24.933 sys 0m6.596s 00:33:24.933 14:43:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:24.933 14:43:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:33:24.933 ************************************ 00:33:24.933 END TEST nvmf_multicontroller 00:33:24.933 ************************************ 00:33:24.933 14:43:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:33:24.933 14:43:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:24.933 14:43:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:24.933 14:43:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.933 ************************************ 00:33:24.933 START TEST nvmf_aer 00:33:24.933 ************************************ 00:33:24.933 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:33:25.195 * Looking for test storage... 
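Each test above is launched through a `run_test` wrapper that prints the starred `START TEST` / `END TEST` banners and reports timing. A simplified, hypothetical sketch of that wrapper (the real harness uses its timing framework and `time`; this version just uses `date` for brevity):

```shell
#!/usr/bin/env bash
# Minimal banner-and-timing wrapper modeled on the run_test output in the log.
run_test_sketch() {
    local name=$1; shift
    local t0 rc
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    t0=$(date +%s)
    "$@"                                  # run the test command with its args
    rc=$?
    echo "************************************"
    echo "END TEST $name (rc=$rc, $(( $(date +%s) - t0 ))s)"
    echo "************************************"
    return $rc
}

run_test_sketch demo_true true
```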
00:33:25.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:25.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.195 --rc genhtml_branch_coverage=1 00:33:25.195 --rc genhtml_function_coverage=1 00:33:25.195 --rc genhtml_legend=1 00:33:25.195 --rc geninfo_all_blocks=1 00:33:25.195 --rc geninfo_unexecuted_blocks=1 00:33:25.195 00:33:25.195 ' 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:25.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.195 --rc 
genhtml_branch_coverage=1 00:33:25.195 --rc genhtml_function_coverage=1 00:33:25.195 --rc genhtml_legend=1 00:33:25.195 --rc geninfo_all_blocks=1 00:33:25.195 --rc geninfo_unexecuted_blocks=1 00:33:25.195 00:33:25.195 ' 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:25.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.195 --rc genhtml_branch_coverage=1 00:33:25.195 --rc genhtml_function_coverage=1 00:33:25.195 --rc genhtml_legend=1 00:33:25.195 --rc geninfo_all_blocks=1 00:33:25.195 --rc geninfo_unexecuted_blocks=1 00:33:25.195 00:33:25.195 ' 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:25.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.195 --rc genhtml_branch_coverage=1 00:33:25.195 --rc genhtml_function_coverage=1 00:33:25.195 --rc genhtml_legend=1 00:33:25.195 --rc geninfo_all_blocks=1 00:33:25.195 --rc geninfo_unexecuted_blocks=1 00:33:25.195 00:33:25.195 ' 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:25.195 14:43:48 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
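The `lcov --version` check traced earlier (`lt 1.15 2` via `cmp_versions`) splits each version string on `.`, `-`, and `:` and compares field by field numerically, padding the shorter array with zeros. A condensed, hypothetical rewrite of that logic:

```shell
#!/usr/bin/env bash
# Field-wise numeric version compare, modeled on scripts/common.sh cmp_versions.
# Returns 0 if $1 < $2, 1 otherwise.
version_lt() {
    local IFS=.-:                 # split fields the same way the trace does
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                      # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

The numeric compare is the point: a plain string compare would wrongly rank `1.9` above `1.15`.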
00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:25.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:33:25.195 14:43:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:33.347 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:33.347 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:33.347 14:43:55 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:33.347 Found net devices under 0000:31:00.0: cvl_0_0 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # 
(( 1 == 0 )) 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:33.347 Found net devices under 0000:31:00.1: cvl_0_1 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:33.347 14:43:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:33.347 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:33.347 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:33.347 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:33.347 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:33.347 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:33.347 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:33.347 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:33.347 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:33.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:33.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.543 ms 00:33:33.348 00:33:33.348 --- 10.0.0.2 ping statistics --- 00:33:33.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.348 rtt min/avg/max/mdev = 0.543/0.543/0.543/0.000 ms 00:33:33.348 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:33.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:33.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:33:33.348 00:33:33.348 --- 10.0.0.1 ping statistics --- 00:33:33.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.348 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:33:33.348 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:33.348 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:33:33.348 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:33.348 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:33.348 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:33.348 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:33.348 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:33.348 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:33.348 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:33.348 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:33:33.348 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:33.348 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:33.348 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:33:33.348 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:33.348 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=3181240 00:33:33.348 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 3181240 00:33:33.348 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 3181240 ']' 00:33:33.348 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:33.348 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:33.348 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:33.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:33.348 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:33.348 14:43:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:33.348 [2024-10-07 14:43:56.377569] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:33:33.348 [2024-10-07 14:43:56.377676] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:33.348 [2024-10-07 14:43:56.503473] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:33.348 [2024-10-07 14:43:56.685022] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:33.348 [2024-10-07 14:43:56.685073] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:33.348 [2024-10-07 14:43:56.685084] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:33.348 [2024-10-07 14:43:56.685095] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:33.348 [2024-10-07 14:43:56.685105] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:33.348 [2024-10-07 14:43:56.687326] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.348 [2024-10-07 14:43:56.687488] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:33:33.348 [2024-10-07 14:43:56.687642] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.348 [2024-10-07 14:43:56.687655] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:33:33.609 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:33.609 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:33:33.609 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:33.609 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:33.609 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:33.609 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:33.609 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:33.609 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.609 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:33.609 [2024-10-07 14:43:57.257591] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:33.609 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.609 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:33:33.609 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.609 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:33.870 Malloc0 00:33:33.870 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.870 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:33.871 [2024-10-07 14:43:57.355990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
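The subsystem configured above is reported back by `rpc.py nvmf_get_subsystems` as a JSON array. Without a live target, here is a hedged sketch of pulling the NQNs out of such output with standard tools; the JSON below is a trimmed sample in the shape of the log's output, not a real RPC response:

```shell
#!/usr/bin/env bash
# Extract the "nqn" fields from nvmf_get_subsystems-style JSON (sample data).
json='[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery"},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe", "max_namespaces": 2}
]'

# grep each "nqn": "..." pair and strip the JSON punctuation with cut.
printf '%s\n' "$json" | grep -o '"nqn": "[^"]*"' | cut -d'"' -f4
# → nqn.2014-08.org.nvmexpress.discovery
# → nqn.2016-06.io.spdk:cnode1
```

In practice `jq -r '.[].nqn'` is the more robust way to do this when jq is available.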
00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:33.871 [ 00:33:33.871 { 00:33:33.871 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:33.871 "subtype": "Discovery", 00:33:33.871 "listen_addresses": [], 00:33:33.871 "allow_any_host": true, 00:33:33.871 "hosts": [] 00:33:33.871 }, 00:33:33.871 { 00:33:33.871 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:33.871 "subtype": "NVMe", 00:33:33.871 "listen_addresses": [ 00:33:33.871 { 00:33:33.871 "trtype": "TCP", 00:33:33.871 "adrfam": "IPv4", 00:33:33.871 "traddr": "10.0.0.2", 00:33:33.871 "trsvcid": "4420" 00:33:33.871 } 00:33:33.871 ], 00:33:33.871 "allow_any_host": true, 00:33:33.871 "hosts": [], 00:33:33.871 "serial_number": "SPDK00000000000001", 00:33:33.871 "model_number": "SPDK bdev Controller", 00:33:33.871 "max_namespaces": 2, 00:33:33.871 "min_cntlid": 1, 00:33:33.871 "max_cntlid": 65519, 00:33:33.871 "namespaces": [ 00:33:33.871 { 00:33:33.871 "nsid": 1, 00:33:33.871 "bdev_name": "Malloc0", 00:33:33.871 "name": "Malloc0", 00:33:33.871 "nguid": "40E8E2A0DF414CE38EAF9CF5D7CDD14C", 00:33:33.871 "uuid": "40e8e2a0-df41-4ce3-8eaf-9cf5d7cdd14c" 00:33:33.871 } 00:33:33.871 ] 00:33:33.871 } 00:33:33.871 ] 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3181591 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:33:33.871 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:33:34.132 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:33:34.132 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:33:34.132 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:33:34.132 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:33:34.132 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:33:34.132 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:33:34.132 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:33:34.132 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:33:34.132 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.132 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:34.132 Malloc1 00:33:34.132 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.132 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:33:34.132 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.132 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:34.132 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.132 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:33:34.132 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.132 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:34.132 [ 00:33:34.132 { 00:33:34.132 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:34.132 "subtype": "Discovery", 00:33:34.132 "listen_addresses": [], 00:33:34.132 "allow_any_host": true, 00:33:34.132 "hosts": [] 00:33:34.132 }, 00:33:34.132 { 00:33:34.132 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:34.132 "subtype": "NVMe", 00:33:34.132 "listen_addresses": [ 00:33:34.132 { 00:33:34.132 "trtype": "TCP", 00:33:34.132 "adrfam": "IPv4", 00:33:34.132 "traddr": "10.0.0.2", 00:33:34.132 "trsvcid": "4420" 00:33:34.132 } 00:33:34.132 ], 00:33:34.132 "allow_any_host": true, 00:33:34.132 "hosts": [], 00:33:34.132 "serial_number": "SPDK00000000000001", 00:33:34.132 "model_number": 
"SPDK bdev Controller", 00:33:34.132 "max_namespaces": 2, 00:33:34.132 "min_cntlid": 1, 00:33:34.132 "max_cntlid": 65519, 00:33:34.132 "namespaces": [ 00:33:34.132 { 00:33:34.132 "nsid": 1, 00:33:34.132 "bdev_name": "Malloc0", 00:33:34.132 "name": "Malloc0", 00:33:34.132 "nguid": "40E8E2A0DF414CE38EAF9CF5D7CDD14C", 00:33:34.132 "uuid": "40e8e2a0-df41-4ce3-8eaf-9cf5d7cdd14c" 00:33:34.132 }, 00:33:34.132 { 00:33:34.132 "nsid": 2, 00:33:34.132 "bdev_name": "Malloc1", 00:33:34.132 "name": "Malloc1", 00:33:34.132 "nguid": "94B4E16B03374AC484DC5BBF3CB8E8B8", 00:33:34.132 "uuid": "94b4e16b-0337-4ac4-84dc-5bbf3cb8e8b8" 00:33:34.132 } 00:33:34.132 ] 00:33:34.132 } 00:33:34.132 ] 00:33:34.132 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.132 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3181591 00:33:34.393 Asynchronous Event Request test 00:33:34.393 Attaching to 10.0.0.2 00:33:34.393 Attached to 10.0.0.2 00:33:34.393 Registering asynchronous event callbacks... 00:33:34.393 Starting namespace attribute notice tests for all controllers... 00:33:34.393 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:33:34.393 aer_cb - Changed Namespace 00:33:34.393 Cleaning up... 
00:33:34.393 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:33:34.393 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.393 14:43:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:34.393 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.393 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:33:34.393 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.393 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:34.655 rmmod nvme_tcp 
00:33:34.655 rmmod nvme_fabrics 00:33:34.655 rmmod nvme_keyring 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 3181240 ']' 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 3181240 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 3181240 ']' 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 3181240 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3181240 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3181240' 00:33:34.655 killing process with pid 3181240 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 3181240 00:33:34.655 14:43:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 3181240 00:33:35.598 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:35.598 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:35.598 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:35.598 14:43:59 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:33:35.598 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:33:35.598 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:35.598 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:33:35.598 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:35.598 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:35.598 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.598 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:35.598 14:43:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.143 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:38.143 00:33:38.143 real 0m12.672s 00:33:38.143 user 0m11.163s 00:33:38.143 sys 0m6.251s 00:33:38.143 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:38.143 14:44:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:33:38.143 ************************************ 00:33:38.143 END TEST nvmf_aer 00:33:38.143 ************************************ 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.144 ************************************ 00:33:38.144 START TEST nvmf_async_init 
00:33:38.144 ************************************ 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:33:38.144 * Looking for test storage... 00:33:38.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init 
-- scripts/common.sh@344 -- # case "$op" in 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:38.144 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:33:38.144 --rc genhtml_branch_coverage=1 00:33:38.144 --rc genhtml_function_coverage=1 00:33:38.144 --rc genhtml_legend=1 00:33:38.144 --rc geninfo_all_blocks=1 00:33:38.144 --rc geninfo_unexecuted_blocks=1 00:33:38.144 00:33:38.144 ' 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:38.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.144 --rc genhtml_branch_coverage=1 00:33:38.144 --rc genhtml_function_coverage=1 00:33:38.144 --rc genhtml_legend=1 00:33:38.144 --rc geninfo_all_blocks=1 00:33:38.144 --rc geninfo_unexecuted_blocks=1 00:33:38.144 00:33:38.144 ' 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:38.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.144 --rc genhtml_branch_coverage=1 00:33:38.144 --rc genhtml_function_coverage=1 00:33:38.144 --rc genhtml_legend=1 00:33:38.144 --rc geninfo_all_blocks=1 00:33:38.144 --rc geninfo_unexecuted_blocks=1 00:33:38.144 00:33:38.144 ' 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:38.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.144 --rc genhtml_branch_coverage=1 00:33:38.144 --rc genhtml_function_coverage=1 00:33:38.144 --rc genhtml_legend=1 00:33:38.144 --rc geninfo_all_blocks=1 00:33:38.144 --rc geninfo_unexecuted_blocks=1 00:33:38.144 00:33:38.144 ' 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:38.144 14:44:01 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.144 
14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.144 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:38.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8fe43834490d49a7ae2651024d6f9226 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:33:38.145 14:44:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:44.747 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:44.747 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:33:44.747 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:44.747 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:44.747 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:44.747 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:44.747 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:44.747 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:33:44.747 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:44.747 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:33:44.747 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:33:44.747 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:33:44.747 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:33:44.747 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:33:44.747 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:33:44.747 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:44.747 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:44.747 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:44.747 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:44.747 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:44.747 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:44.747 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:44.747 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:44.747 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:44.748 14:44:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:44.748 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:44.748 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:44.748 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:44.749 Found net devices under 0000:31:00.0: cvl_0_0 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ 
up == up ]] 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:44.749 Found net devices under 0000:31:00.1: cvl_0_1 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:44.749 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:44.750 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:44.750 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:44.750 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:44.750 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:44.750 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:45.017 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:45.017 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:45.017 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:45.017 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:45.017 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:45.017 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:45.017 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:45.017 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:45.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:45.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:33:45.017 00:33:45.017 --- 10.0.0.2 ping statistics --- 00:33:45.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:45.017 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:33:45.017 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:45.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:45.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:33:45.017 00:33:45.017 --- 10.0.0.1 ping statistics --- 00:33:45.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:45.017 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:33:45.017 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:45.017 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:33:45.017 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:45.017 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:45.017 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:45.017 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:45.017 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:45.017 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:45.017 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:45.278 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:33:45.278 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:45.278 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:33:45.278 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:45.278 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=3185997 00:33:45.278 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 3185997 00:33:45.278 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:33:45.278 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 3185997 ']' 00:33:45.278 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:45.278 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:45.278 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:45.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:45.278 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:45.278 14:44:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:45.278 [2024-10-07 14:44:08.825857] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:33:45.278 [2024-10-07 14:44:08.825967] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:45.278 [2024-10-07 14:44:08.967817] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:45.539 [2024-10-07 14:44:09.149160] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:45.539 [2024-10-07 14:44:09.149211] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:45.539 [2024-10-07 14:44:09.149223] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:45.539 [2024-10-07 14:44:09.149235] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:45.539 [2024-10-07 14:44:09.149245] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
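For readers following the `nvmf_tcp_init` phase traced above, the test topology it builds can be condensed into the following sketch of the logged commands. The interface names `cvl_0_0`/`cvl_0_1`, the `10.0.0.x` addresses, and the namespace name are specific to this run (e810 NICs enumerated earlier in the trace); all of these commands require root and real hardware, so this is an illustration of the sequence, not a portable script.

```shell
#!/usr/bin/env bash
# Sketch of the namespace setup traced in the log above: one NIC port is
# moved into a network namespace to act as the NVMe-oF target, the other
# stays in the root namespace as the initiator.
set -euo pipefail

TARGET_IF=cvl_0_0        # target-side port (moved into the namespace)
INITIATOR_IF=cvl_0_1     # initiator-side port (stays in root namespace)
NS=cvl_0_0_ns_spdk       # namespace name used by this run

# Start from clean addresses, then create the namespace.
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"

# Move the target port into the namespace and address both ends.
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

# Bring the links (and the namespace loopback) up.
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port on the initiator side, as the ipts wrapper does.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Verify reachability in both directions, then load the host driver.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
modprobe nvme-tcp
```

The harness then launches `nvmf_tgt` inside the namespace (via `$NVMF_TARGET_NS_CMD`, i.e. `ip netns exec cvl_0_0_ns_spdk`), which is why the subsequent RPC calls in the trace create a listener on 10.0.0.2:4420 while the initiator-side `bdev_nvme_attach_controller` connects to it from the root namespace.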
00:33:45.539 [2024-10-07 14:44:09.150474] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:46.111 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:46.111 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:33:46.111 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:46.111 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:46.111 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:46.111 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:46.112 [2024-10-07 14:44:09.644541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:46.112 null0 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8fe43834490d49a7ae2651024d6f9226 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:46.112 [2024-10-07 14:44:09.684811] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.112 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:46.373 nvme0n1 00:33:46.373 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.373 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:33:46.373 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.373 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:46.373 [ 00:33:46.373 { 00:33:46.373 "name": "nvme0n1", 00:33:46.373 "aliases": [ 00:33:46.373 "8fe43834-490d-49a7-ae26-51024d6f9226" 00:33:46.373 ], 00:33:46.373 "product_name": "NVMe disk", 00:33:46.373 "block_size": 512, 00:33:46.373 "num_blocks": 2097152, 00:33:46.373 "uuid": "8fe43834-490d-49a7-ae26-51024d6f9226", 00:33:46.373 "numa_id": 0, 00:33:46.373 "assigned_rate_limits": { 00:33:46.373 "rw_ios_per_sec": 0, 00:33:46.373 "rw_mbytes_per_sec": 0, 00:33:46.373 "r_mbytes_per_sec": 0, 00:33:46.373 "w_mbytes_per_sec": 0 00:33:46.373 }, 00:33:46.373 "claimed": false, 00:33:46.373 "zoned": false, 00:33:46.373 "supported_io_types": { 00:33:46.373 "read": true, 00:33:46.373 "write": true, 00:33:46.373 "unmap": false, 00:33:46.373 "flush": true, 00:33:46.373 "reset": true, 00:33:46.373 "nvme_admin": true, 00:33:46.373 "nvme_io": true, 00:33:46.373 "nvme_io_md": false, 00:33:46.373 "write_zeroes": true, 00:33:46.373 "zcopy": false, 00:33:46.373 "get_zone_info": false, 00:33:46.373 "zone_management": false, 00:33:46.373 "zone_append": false, 00:33:46.373 "compare": true, 00:33:46.373 "compare_and_write": true, 00:33:46.373 "abort": true, 00:33:46.373 "seek_hole": false, 00:33:46.373 "seek_data": false, 00:33:46.373 "copy": true, 00:33:46.373 
"nvme_iov_md": false 00:33:46.373 }, 00:33:46.373 "memory_domains": [ 00:33:46.373 { 00:33:46.373 "dma_device_id": "system", 00:33:46.373 "dma_device_type": 1 00:33:46.373 } 00:33:46.373 ], 00:33:46.373 "driver_specific": { 00:33:46.373 "nvme": [ 00:33:46.373 { 00:33:46.373 "trid": { 00:33:46.373 "trtype": "TCP", 00:33:46.373 "adrfam": "IPv4", 00:33:46.373 "traddr": "10.0.0.2", 00:33:46.373 "trsvcid": "4420", 00:33:46.373 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:46.373 }, 00:33:46.373 "ctrlr_data": { 00:33:46.373 "cntlid": 1, 00:33:46.373 "vendor_id": "0x8086", 00:33:46.373 "model_number": "SPDK bdev Controller", 00:33:46.373 "serial_number": "00000000000000000000", 00:33:46.373 "firmware_revision": "25.01", 00:33:46.373 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:46.373 "oacs": { 00:33:46.373 "security": 0, 00:33:46.373 "format": 0, 00:33:46.373 "firmware": 0, 00:33:46.373 "ns_manage": 0 00:33:46.373 }, 00:33:46.373 "multi_ctrlr": true, 00:33:46.373 "ana_reporting": false 00:33:46.373 }, 00:33:46.373 "vs": { 00:33:46.373 "nvme_version": "1.3" 00:33:46.373 }, 00:33:46.373 "ns_data": { 00:33:46.373 "id": 1, 00:33:46.373 "can_share": true 00:33:46.373 } 00:33:46.373 } 00:33:46.373 ], 00:33:46.373 "mp_policy": "active_passive" 00:33:46.373 } 00:33:46.373 } 00:33:46.373 ] 00:33:46.373 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.373 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:33:46.373 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.373 14:44:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:46.373 [2024-10-07 14:44:09.945781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:46.373 [2024-10-07 14:44:09.945873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x61500039e200 (9): Bad file descriptor 00:33:46.634 [2024-10-07 14:44:10.090190] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:46.634 [ 00:33:46.634 { 00:33:46.634 "name": "nvme0n1", 00:33:46.634 "aliases": [ 00:33:46.634 "8fe43834-490d-49a7-ae26-51024d6f9226" 00:33:46.634 ], 00:33:46.634 "product_name": "NVMe disk", 00:33:46.634 "block_size": 512, 00:33:46.634 "num_blocks": 2097152, 00:33:46.634 "uuid": "8fe43834-490d-49a7-ae26-51024d6f9226", 00:33:46.634 "numa_id": 0, 00:33:46.634 "assigned_rate_limits": { 00:33:46.634 "rw_ios_per_sec": 0, 00:33:46.634 "rw_mbytes_per_sec": 0, 00:33:46.634 "r_mbytes_per_sec": 0, 00:33:46.634 "w_mbytes_per_sec": 0 00:33:46.634 }, 00:33:46.634 "claimed": false, 00:33:46.634 "zoned": false, 00:33:46.634 "supported_io_types": { 00:33:46.634 "read": true, 00:33:46.634 "write": true, 00:33:46.634 "unmap": false, 00:33:46.634 "flush": true, 00:33:46.634 "reset": true, 00:33:46.634 "nvme_admin": true, 00:33:46.634 "nvme_io": true, 00:33:46.634 "nvme_io_md": false, 00:33:46.634 "write_zeroes": true, 00:33:46.634 "zcopy": false, 00:33:46.634 "get_zone_info": false, 00:33:46.634 "zone_management": false, 00:33:46.634 "zone_append": false, 00:33:46.634 "compare": true, 00:33:46.634 "compare_and_write": true, 00:33:46.634 "abort": true, 00:33:46.634 "seek_hole": false, 00:33:46.634 "seek_data": false, 00:33:46.634 "copy": true, 00:33:46.634 "nvme_iov_md": false 00:33:46.634 }, 00:33:46.634 "memory_domains": [ 00:33:46.634 { 00:33:46.634 
"dma_device_id": "system", 00:33:46.634 "dma_device_type": 1 00:33:46.634 } 00:33:46.634 ], 00:33:46.634 "driver_specific": { 00:33:46.634 "nvme": [ 00:33:46.634 { 00:33:46.634 "trid": { 00:33:46.634 "trtype": "TCP", 00:33:46.634 "adrfam": "IPv4", 00:33:46.634 "traddr": "10.0.0.2", 00:33:46.634 "trsvcid": "4420", 00:33:46.634 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:46.634 }, 00:33:46.634 "ctrlr_data": { 00:33:46.634 "cntlid": 2, 00:33:46.634 "vendor_id": "0x8086", 00:33:46.634 "model_number": "SPDK bdev Controller", 00:33:46.634 "serial_number": "00000000000000000000", 00:33:46.634 "firmware_revision": "25.01", 00:33:46.634 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:46.634 "oacs": { 00:33:46.634 "security": 0, 00:33:46.634 "format": 0, 00:33:46.634 "firmware": 0, 00:33:46.634 "ns_manage": 0 00:33:46.634 }, 00:33:46.634 "multi_ctrlr": true, 00:33:46.634 "ana_reporting": false 00:33:46.634 }, 00:33:46.634 "vs": { 00:33:46.634 "nvme_version": "1.3" 00:33:46.634 }, 00:33:46.634 "ns_data": { 00:33:46.634 "id": 1, 00:33:46.634 "can_share": true 00:33:46.634 } 00:33:46.634 } 00:33:46.634 ], 00:33:46.634 "mp_policy": "active_passive" 00:33:46.634 } 00:33:46.634 } 00:33:46.634 ] 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Mg96QjtNMe 00:33:46.634 14:44:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Mg96QjtNMe 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.Mg96QjtNMe 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:46.634 [2024-10-07 14:44:10.154478] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:46.634 [2024-10-07 14:44:10.154734] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.634 14:44:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:33:46.634 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.635 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:46.635 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.635 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:33:46.635 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.635 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:46.635 [2024-10-07 14:44:10.174551] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:46.635 nvme0n1 00:33:46.635 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.635 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:33:46.635 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.635 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:46.635 [ 00:33:46.635 { 00:33:46.635 "name": "nvme0n1", 00:33:46.635 "aliases": [ 00:33:46.635 "8fe43834-490d-49a7-ae26-51024d6f9226" 00:33:46.635 ], 00:33:46.635 "product_name": "NVMe disk", 00:33:46.635 "block_size": 512, 00:33:46.635 "num_blocks": 2097152, 00:33:46.635 "uuid": "8fe43834-490d-49a7-ae26-51024d6f9226", 00:33:46.635 "numa_id": 0, 00:33:46.635 "assigned_rate_limits": { 00:33:46.635 "rw_ios_per_sec": 0, 00:33:46.635 "rw_mbytes_per_sec": 0, 
00:33:46.635 "r_mbytes_per_sec": 0, 00:33:46.635 "w_mbytes_per_sec": 0 00:33:46.635 }, 00:33:46.635 "claimed": false, 00:33:46.635 "zoned": false, 00:33:46.635 "supported_io_types": { 00:33:46.635 "read": true, 00:33:46.635 "write": true, 00:33:46.635 "unmap": false, 00:33:46.635 "flush": true, 00:33:46.635 "reset": true, 00:33:46.635 "nvme_admin": true, 00:33:46.635 "nvme_io": true, 00:33:46.635 "nvme_io_md": false, 00:33:46.635 "write_zeroes": true, 00:33:46.635 "zcopy": false, 00:33:46.635 "get_zone_info": false, 00:33:46.635 "zone_management": false, 00:33:46.635 "zone_append": false, 00:33:46.635 "compare": true, 00:33:46.635 "compare_and_write": true, 00:33:46.635 "abort": true, 00:33:46.635 "seek_hole": false, 00:33:46.635 "seek_data": false, 00:33:46.635 "copy": true, 00:33:46.635 "nvme_iov_md": false 00:33:46.635 }, 00:33:46.635 "memory_domains": [ 00:33:46.635 { 00:33:46.635 "dma_device_id": "system", 00:33:46.635 "dma_device_type": 1 00:33:46.635 } 00:33:46.635 ], 00:33:46.635 "driver_specific": { 00:33:46.635 "nvme": [ 00:33:46.635 { 00:33:46.635 "trid": { 00:33:46.635 "trtype": "TCP", 00:33:46.635 "adrfam": "IPv4", 00:33:46.635 "traddr": "10.0.0.2", 00:33:46.635 "trsvcid": "4421", 00:33:46.635 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:46.635 }, 00:33:46.635 "ctrlr_data": { 00:33:46.635 "cntlid": 3, 00:33:46.635 "vendor_id": "0x8086", 00:33:46.635 "model_number": "SPDK bdev Controller", 00:33:46.635 "serial_number": "00000000000000000000", 00:33:46.635 "firmware_revision": "25.01", 00:33:46.635 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:46.635 "oacs": { 00:33:46.635 "security": 0, 00:33:46.635 "format": 0, 00:33:46.635 "firmware": 0, 00:33:46.635 "ns_manage": 0 00:33:46.635 }, 00:33:46.635 "multi_ctrlr": true, 00:33:46.635 "ana_reporting": false 00:33:46.635 }, 00:33:46.635 "vs": { 00:33:46.635 "nvme_version": "1.3" 00:33:46.635 }, 00:33:46.635 "ns_data": { 00:33:46.635 "id": 1, 00:33:46.635 "can_share": true 00:33:46.635 } 00:33:46.635 } 
00:33:46.635 ], 00:33:46.635 "mp_policy": "active_passive" 00:33:46.635 } 00:33:46.635 } 00:33:46.635 ] 00:33:46.635 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.635 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:46.635 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:46.635 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:46.635 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:46.635 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.Mg96QjtNMe 00:33:46.635 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:33:46.635 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:33:46.635 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:46.635 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:33:46.635 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:46.635 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:33:46.635 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:46.635 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:46.635 rmmod nvme_tcp 00:33:46.635 rmmod nvme_fabrics 00:33:46.635 rmmod nvme_keyring 00:33:46.635 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:46.908 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:33:46.908 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:33:46.908 14:44:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 3185997 ']' 00:33:46.908 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 3185997 00:33:46.908 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 3185997 ']' 00:33:46.908 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 3185997 00:33:46.908 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:33:46.908 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:46.908 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3185997 00:33:46.908 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:46.908 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:46.908 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3185997' 00:33:46.908 killing process with pid 3185997 00:33:46.908 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 3185997 00:33:46.908 14:44:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 3185997 00:33:47.852 14:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:47.852 14:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:47.852 14:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:47.852 14:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:33:47.852 14:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:33:47.852 14:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:47.852 
14:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:33:47.852 14:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:47.852 14:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:47.852 14:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.852 14:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:47.852 14:44:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:49.768 14:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:49.768 00:33:49.768 real 0m12.056s 00:33:49.768 user 0m4.491s 00:33:49.768 sys 0m5.996s 00:33:49.769 14:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:49.769 14:44:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:33:49.769 ************************************ 00:33:49.769 END TEST nvmf_async_init 00:33:49.769 ************************************ 00:33:49.769 14:44:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:33:49.769 14:44:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:49.769 14:44:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:49.769 14:44:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:49.769 ************************************ 00:33:49.769 START TEST dma 00:33:49.769 ************************************ 00:33:49.769 14:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:33:50.030 * Looking for test storage... 00:33:50.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:33:50.030 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:50.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.031 --rc genhtml_branch_coverage=1 00:33:50.031 --rc genhtml_function_coverage=1 00:33:50.031 --rc genhtml_legend=1 00:33:50.031 --rc geninfo_all_blocks=1 00:33:50.031 --rc geninfo_unexecuted_blocks=1 00:33:50.031 00:33:50.031 ' 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:50.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.031 --rc genhtml_branch_coverage=1 00:33:50.031 --rc genhtml_function_coverage=1 
00:33:50.031 --rc genhtml_legend=1 00:33:50.031 --rc geninfo_all_blocks=1 00:33:50.031 --rc geninfo_unexecuted_blocks=1 00:33:50.031 00:33:50.031 ' 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:50.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.031 --rc genhtml_branch_coverage=1 00:33:50.031 --rc genhtml_function_coverage=1 00:33:50.031 --rc genhtml_legend=1 00:33:50.031 --rc geninfo_all_blocks=1 00:33:50.031 --rc geninfo_unexecuted_blocks=1 00:33:50.031 00:33:50.031 ' 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:50.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.031 --rc genhtml_branch_coverage=1 00:33:50.031 --rc genhtml_function_coverage=1 00:33:50.031 --rc genhtml_legend=1 00:33:50.031 --rc geninfo_all_blocks=1 00:33:50.031 --rc geninfo_unexecuted_blocks=1 00:33:50.031 00:33:50.031 ' 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:33:50.031 
14:44:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:50.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:33:50.031 00:33:50.031 real 0m0.241s 00:33:50.031 user 0m0.141s 00:33:50.031 sys 0m0.113s 00:33:50.031 14:44:13 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:50.031 14:44:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:33:50.031 ************************************ 00:33:50.031 END TEST dma 00:33:50.031 ************************************ 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:50.294 ************************************ 00:33:50.294 START TEST nvmf_identify 00:33:50.294 ************************************ 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:33:50.294 * Looking for test storage... 
00:33:50.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:50.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.294 --rc genhtml_branch_coverage=1 00:33:50.294 --rc genhtml_function_coverage=1 00:33:50.294 --rc genhtml_legend=1 00:33:50.294 --rc geninfo_all_blocks=1 00:33:50.294 --rc geninfo_unexecuted_blocks=1 00:33:50.294 00:33:50.294 ' 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- 
# LCOV_OPTS=' 00:33:50.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.294 --rc genhtml_branch_coverage=1 00:33:50.294 --rc genhtml_function_coverage=1 00:33:50.294 --rc genhtml_legend=1 00:33:50.294 --rc geninfo_all_blocks=1 00:33:50.294 --rc geninfo_unexecuted_blocks=1 00:33:50.294 00:33:50.294 ' 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:50.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.294 --rc genhtml_branch_coverage=1 00:33:50.294 --rc genhtml_function_coverage=1 00:33:50.294 --rc genhtml_legend=1 00:33:50.294 --rc geninfo_all_blocks=1 00:33:50.294 --rc geninfo_unexecuted_blocks=1 00:33:50.294 00:33:50.294 ' 00:33:50.294 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:50.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.295 --rc genhtml_branch_coverage=1 00:33:50.295 --rc genhtml_function_coverage=1 00:33:50.295 --rc genhtml_legend=1 00:33:50.295 --rc geninfo_all_blocks=1 00:33:50.295 --rc geninfo_unexecuted_blocks=1 00:33:50.295 00:33:50.295 ' 00:33:50.295 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:50.295 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:33:50.295 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:50.295 14:44:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:50.295 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:50.295 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:50.295 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:50.295 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:33:50.295 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:50.295 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:50.295 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:50.556 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:50.556 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:50.556 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:50.556 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:50.556 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:50.556 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:50.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:33:50.557 14:44:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:58.708 14:44:21 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:58.708 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:58.708 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:58.709 
14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:58.709 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:58.709 Found net devices under 0000:31:00.0: cvl_0_0 00:33:58.709 14:44:21 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:58.709 Found net devices under 0000:31:00.1: cvl_0_1 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:58.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:58.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.498 ms 00:33:58.709 00:33:58.709 --- 10.0.0.2 ping statistics --- 00:33:58.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:58.709 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:58.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:58.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:33:58.709 00:33:58.709 --- 10.0.0.1 ping statistics --- 00:33:58.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:58.709 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3190802 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3190802 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 3190802 ']' 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:58.709 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:58.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:58.710 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:58.710 14:44:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:58.710 [2024-10-07 14:44:21.515836] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:33:58.710 [2024-10-07 14:44:21.515962] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:58.710 [2024-10-07 14:44:21.655425] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:58.710 [2024-10-07 14:44:21.837014] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:58.710 [2024-10-07 14:44:21.837067] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:58.710 [2024-10-07 14:44:21.837079] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:58.710 [2024-10-07 14:44:21.837091] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:58.710 [2024-10-07 14:44:21.837101] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:58.710 [2024-10-07 14:44:21.839610] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:58.710 [2024-10-07 14:44:21.839693] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:33:58.710 [2024-10-07 14:44:21.839827] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:58.710 [2024-10-07 14:44:21.839849] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:33:58.710 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:58.710 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:33:58.710 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:58.710 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.710 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:58.710 [2024-10-07 14:44:22.297396] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:58.710 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.710 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:33:58.710 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:58.710 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:58.710 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:58.710 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.710 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:58.710 Malloc0 00:33:58.710 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.710 14:44:22 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:58.710 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.710 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:58.710 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.972 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:33:58.972 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.972 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:58.972 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.972 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:58.972 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.972 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:58.973 [2024-10-07 14:44:22.435937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:58.973 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.973 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:58.973 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.973 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:58.973 14:44:22 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.973 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:33:58.973 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.973 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:58.973 [ 00:33:58.973 { 00:33:58.973 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:58.973 "subtype": "Discovery", 00:33:58.973 "listen_addresses": [ 00:33:58.973 { 00:33:58.973 "trtype": "TCP", 00:33:58.973 "adrfam": "IPv4", 00:33:58.973 "traddr": "10.0.0.2", 00:33:58.973 "trsvcid": "4420" 00:33:58.973 } 00:33:58.973 ], 00:33:58.973 "allow_any_host": true, 00:33:58.973 "hosts": [] 00:33:58.973 }, 00:33:58.973 { 00:33:58.973 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:58.973 "subtype": "NVMe", 00:33:58.973 "listen_addresses": [ 00:33:58.973 { 00:33:58.973 "trtype": "TCP", 00:33:58.973 "adrfam": "IPv4", 00:33:58.973 "traddr": "10.0.0.2", 00:33:58.973 "trsvcid": "4420" 00:33:58.973 } 00:33:58.973 ], 00:33:58.973 "allow_any_host": true, 00:33:58.973 "hosts": [], 00:33:58.973 "serial_number": "SPDK00000000000001", 00:33:58.973 "model_number": "SPDK bdev Controller", 00:33:58.973 "max_namespaces": 32, 00:33:58.973 "min_cntlid": 1, 00:33:58.973 "max_cntlid": 65519, 00:33:58.973 "namespaces": [ 00:33:58.973 { 00:33:58.973 "nsid": 1, 00:33:58.973 "bdev_name": "Malloc0", 00:33:58.973 "name": "Malloc0", 00:33:58.973 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:33:58.973 "eui64": "ABCDEF0123456789", 00:33:58.973 "uuid": "5b1eea9c-4a13-4dc2-987f-a51b0d55323a" 00:33:58.973 } 00:33:58.973 ] 00:33:58.973 } 00:33:58.973 ] 00:33:58.973 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.973 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:33:58.973 [2024-10-07 14:44:22.519905] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:33:58.973 [2024-10-07 14:44:22.519992] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3191150 ] 00:33:58.973 [2024-10-07 14:44:22.572572] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:33:58.973 [2024-10-07 14:44:22.572669] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:33:58.973 [2024-10-07 14:44:22.572687] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:33:58.973 [2024-10-07 14:44:22.572710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:33:58.973 [2024-10-07 14:44:22.572730] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:33:58.973 [2024-10-07 14:44:22.573469] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:33:58.973 [2024-10-07 14:44:22.573522] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000025600 0 00:33:58.973 [2024-10-07 14:44:22.584024] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:33:58.973 [2024-10-07 14:44:22.584049] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:33:58.973 [2024-10-07 14:44:22.584058] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:33:58.973 [2024-10-07 14:44:22.584065] 
nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:33:58.973 [2024-10-07 14:44:22.584122] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:58.973 [2024-10-07 14:44:22.584136] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:58.973 [2024-10-07 14:44:22.584146] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:33:58.973 [2024-10-07 14:44:22.584170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:33:58.973 [2024-10-07 14:44:22.584197] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:33:58.973 [2024-10-07 14:44:22.592023] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:58.973 [2024-10-07 14:44:22.592044] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:58.973 [2024-10-07 14:44:22.592051] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:58.973 [2024-10-07 14:44:22.592060] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:33:58.973 [2024-10-07 14:44:22.592080] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:33:58.973 [2024-10-07 14:44:22.592095] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:33:58.973 [2024-10-07 14:44:22.592104] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:33:58.973 [2024-10-07 14:44:22.592121] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:58.973 [2024-10-07 14:44:22.592132] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:58.973 [2024-10-07 14:44:22.592139] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x615000025600) 00:33:58.973 [2024-10-07 14:44:22.592154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.973 [2024-10-07 14:44:22.592177] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:33:58.973 [2024-10-07 14:44:22.592384] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:58.973 [2024-10-07 14:44:22.592396] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:58.973 [2024-10-07 14:44:22.592402] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:58.973 [2024-10-07 14:44:22.592410] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:33:58.973 [2024-10-07 14:44:22.592422] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:33:58.973 [2024-10-07 14:44:22.592443] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:33:58.973 [2024-10-07 14:44:22.592454] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:58.973 [2024-10-07 14:44:22.592461] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:58.973 [2024-10-07 14:44:22.592468] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:33:58.973 [2024-10-07 14:44:22.592483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.973 [2024-10-07 14:44:22.592502] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:33:58.973 [2024-10-07 14:44:22.592689] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:58.973 [2024-10-07 14:44:22.592699] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:58.973 [2024-10-07 14:44:22.592705] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:58.973 [2024-10-07 14:44:22.592711] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:33:58.973 [2024-10-07 14:44:22.592720] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:33:58.973 [2024-10-07 14:44:22.592733] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:33:58.973 [2024-10-07 14:44:22.592744] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:58.973 [2024-10-07 14:44:22.592751] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:58.973 [2024-10-07 14:44:22.592760] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:33:58.973 [2024-10-07 14:44:22.592772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.973 [2024-10-07 14:44:22.592793] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:33:58.973 [2024-10-07 14:44:22.593034] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:58.973 [2024-10-07 14:44:22.593048] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:58.973 [2024-10-07 14:44:22.593054] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:58.973 [2024-10-07 14:44:22.593060] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:33:58.973 [2024-10-07 14:44:22.593069] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 
(timeout 15000 ms) 00:33:58.973 [2024-10-07 14:44:22.593084] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:58.973 [2024-10-07 14:44:22.593091] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:58.973 [2024-10-07 14:44:22.593098] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:33:58.973 [2024-10-07 14:44:22.593109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.973 [2024-10-07 14:44:22.593124] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:33:58.973 [2024-10-07 14:44:22.593311] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:58.973 [2024-10-07 14:44:22.593320] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:58.973 [2024-10-07 14:44:22.593326] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:58.973 [2024-10-07 14:44:22.593332] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:33:58.973 [2024-10-07 14:44:22.593340] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:33:58.973 [2024-10-07 14:44:22.593365] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:33:58.973 [2024-10-07 14:44:22.593378] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:33:58.973 [2024-10-07 14:44:22.593488] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:33:58.973 [2024-10-07 14:44:22.593496] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:33:58.973 [2024-10-07 14:44:22.593515] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:58.973 [2024-10-07 14:44:22.593524] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:58.973 [2024-10-07 14:44:22.593531] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:33:58.973 [2024-10-07 14:44:22.593543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.973 [2024-10-07 14:44:22.593558] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:33:58.973 [2024-10-07 14:44:22.593774] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:58.974 [2024-10-07 14:44:22.593784] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:58.974 [2024-10-07 14:44:22.593790] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.593796] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:33:58.974 [2024-10-07 14:44:22.593804] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:33:58.974 [2024-10-07 14:44:22.593821] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.593828] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.593834] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:33:58.974 [2024-10-07 14:44:22.593846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.974 [2024-10-07 
14:44:22.593860] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:33:58.974 [2024-10-07 14:44:22.594081] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:58.974 [2024-10-07 14:44:22.594092] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:58.974 [2024-10-07 14:44:22.594098] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.594104] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:33:58.974 [2024-10-07 14:44:22.594115] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:33:58.974 [2024-10-07 14:44:22.594124] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:33:58.974 [2024-10-07 14:44:22.594136] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:33:58.974 [2024-10-07 14:44:22.594146] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:33:58.974 [2024-10-07 14:44:22.594164] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.594174] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:33:58.974 [2024-10-07 14:44:22.594187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.974 [2024-10-07 14:44:22.594203] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:33:58.974 [2024-10-07 14:44:22.594458] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:58.974 [2024-10-07 14:44:22.594471] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:58.974 [2024-10-07 14:44:22.594478] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.594486] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=0 00:33:58.974 [2024-10-07 14:44:22.594497] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:33:58.974 [2024-10-07 14:44:22.594505] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.594523] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.594531] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.640017] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:58.974 [2024-10-07 14:44:22.640038] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:58.974 [2024-10-07 14:44:22.640044] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.640052] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:33:58.974 [2024-10-07 14:44:22.640071] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:33:58.974 [2024-10-07 14:44:22.640080] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:33:58.974 [2024-10-07 14:44:22.640091] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:33:58.974 [2024-10-07 14:44:22.640102] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:33:58.974 [2024-10-07 14:44:22.640110] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:33:58.974 [2024-10-07 14:44:22.640119] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:33:58.974 [2024-10-07 14:44:22.640135] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:33:58.974 [2024-10-07 14:44:22.640147] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.640155] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.640163] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:33:58.974 [2024-10-07 14:44:22.640180] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:58.974 [2024-10-07 14:44:22.640203] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:33:58.974 [2024-10-07 14:44:22.640437] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:58.974 [2024-10-07 14:44:22.640447] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:58.974 [2024-10-07 14:44:22.640452] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.640459] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:33:58.974 [2024-10-07 14:44:22.640472] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.640479] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:58.974 [2024-10-07 
14:44:22.640486] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:33:58.974 [2024-10-07 14:44:22.640498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.974 [2024-10-07 14:44:22.640507] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.640513] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.640518] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000025600) 00:33:58.974 [2024-10-07 14:44:22.640531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.974 [2024-10-07 14:44:22.640540] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.640545] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.640551] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000025600) 00:33:58.974 [2024-10-07 14:44:22.640561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.974 [2024-10-07 14:44:22.640569] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.640575] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.640580] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:33:58.974 [2024-10-07 14:44:22.640590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.974 [2024-10-07 14:44:22.640597] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:33:58.974 [2024-10-07 14:44:22.640620] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:33:58.974 [2024-10-07 14:44:22.640630] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.640636] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:33:58.974 [2024-10-07 14:44:22.640649] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.974 [2024-10-07 14:44:22.640673] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:33:58.974 [2024-10-07 14:44:22.640682] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:33:58.974 [2024-10-07 14:44:22.640689] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:33:58.974 [2024-10-07 14:44:22.640696] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:33:58.974 [2024-10-07 14:44:22.640703] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:33:58.974 [2024-10-07 14:44:22.640959] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:58.974 [2024-10-07 14:44:22.640968] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:58.974 [2024-10-07 14:44:22.640974] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.640980] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:33:58.974 [2024-10-07 14:44:22.640990] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: 
*DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:33:58.974 [2024-10-07 14:44:22.640998] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:33:58.974 [2024-10-07 14:44:22.641032] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.641040] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:33:58.974 [2024-10-07 14:44:22.641052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.974 [2024-10-07 14:44:22.641068] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:33:58.974 [2024-10-07 14:44:22.641287] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:58.974 [2024-10-07 14:44:22.641297] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:58.974 [2024-10-07 14:44:22.641303] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.641314] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=4 00:33:58.974 [2024-10-07 14:44:22.641325] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:33:58.974 [2024-10-07 14:44:22.641332] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.641358] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.641366] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.641514] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:58.974 [2024-10-07 14:44:22.641524] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:58.974 [2024-10-07 14:44:22.641529] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.641536] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:33:58.974 [2024-10-07 14:44:22.641557] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:33:58.974 [2024-10-07 14:44:22.641596] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:58.974 [2024-10-07 14:44:22.641604] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:33:58.974 [2024-10-07 14:44:22.641619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.974 [2024-10-07 14:44:22.641630] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:58.975 [2024-10-07 14:44:22.641636] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:58.975 [2024-10-07 14:44:22.641643] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025600) 00:33:58.975 [2024-10-07 14:44:22.641654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:33:58.975 [2024-10-07 14:44:22.641671] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:33:58.975 [2024-10-07 14:44:22.641679] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:33:58.975 [2024-10-07 14:44:22.641957] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:58.975 [2024-10-07 14:44:22.641967] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:58.975 [2024-10-07 14:44:22.641973] 
nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:58.975 [2024-10-07 14:44:22.641980] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=1024, cccid=4 00:33:58.975 [2024-10-07 14:44:22.641988] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=1024 00:33:58.975 [2024-10-07 14:44:22.641997] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:58.975 [2024-10-07 14:44:22.642017] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:58.975 [2024-10-07 14:44:22.642024] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:58.975 [2024-10-07 14:44:22.642037] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:58.975 [2024-10-07 14:44:22.642045] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:58.975 [2024-10-07 14:44:22.642051] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:58.975 [2024-10-07 14:44:22.642058] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025600 00:33:59.236 [2024-10-07 14:44:22.682200] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.236 [2024-10-07 14:44:22.682219] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.236 [2024-10-07 14:44:22.682225] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.236 [2024-10-07 14:44:22.682239] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:33:59.236 [2024-10-07 14:44:22.682266] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.236 [2024-10-07 14:44:22.682277] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:33:59.236 [2024-10-07 14:44:22.682291] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.236 [2024-10-07 14:44:22.682313] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:33:59.236 [2024-10-07 14:44:22.682511] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:59.236 [2024-10-07 14:44:22.682521] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:59.236 [2024-10-07 14:44:22.682527] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:59.236 [2024-10-07 14:44:22.682533] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=3072, cccid=4 00:33:59.236 [2024-10-07 14:44:22.682540] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=3072 00:33:59.236 [2024-10-07 14:44:22.682547] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.236 [2024-10-07 14:44:22.682569] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:59.236 [2024-10-07 14:44:22.682576] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:59.236 [2024-10-07 14:44:22.724204] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.236 [2024-10-07 14:44:22.724224] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.236 [2024-10-07 14:44:22.724230] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.236 [2024-10-07 14:44:22.724236] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:33:59.236 [2024-10-07 14:44:22.724256] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.236 [2024-10-07 14:44:22.724263] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 
00:33:59.236 [2024-10-07 14:44:22.724281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.236 [2024-10-07 14:44:22.724303] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:33:59.236 [2024-10-07 14:44:22.724554] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:59.236 [2024-10-07 14:44:22.724566] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:59.236 [2024-10-07 14:44:22.724572] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:59.236 [2024-10-07 14:44:22.724578] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=8, cccid=4 00:33:59.236 [2024-10-07 14:44:22.724586] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=8 00:33:59.236 [2024-10-07 14:44:22.724592] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.236 [2024-10-07 14:44:22.724604] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:59.236 [2024-10-07 14:44:22.724610] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:59.236 [2024-10-07 14:44:22.766184] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.236 [2024-10-07 14:44:22.766203] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.236 [2024-10-07 14:44:22.766209] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.236 [2024-10-07 14:44:22.766216] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:33:59.236 ===================================================== 00:33:59.236 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:59.236 
===================================================== 00:33:59.236 Controller Capabilities/Features 00:33:59.236 ================================ 00:33:59.236 Vendor ID: 0000 00:33:59.236 Subsystem Vendor ID: 0000 00:33:59.236 Serial Number: .................... 00:33:59.236 Model Number: ........................................ 00:33:59.236 Firmware Version: 25.01 00:33:59.236 Recommended Arb Burst: 0 00:33:59.236 IEEE OUI Identifier: 00 00 00 00:33:59.236 Multi-path I/O 00:33:59.236 May have multiple subsystem ports: No 00:33:59.236 May have multiple controllers: No 00:33:59.236 Associated with SR-IOV VF: No 00:33:59.236 Max Data Transfer Size: 131072 00:33:59.236 Max Number of Namespaces: 0 00:33:59.236 Max Number of I/O Queues: 1024 00:33:59.236 NVMe Specification Version (VS): 1.3 00:33:59.236 NVMe Specification Version (Identify): 1.3 00:33:59.236 Maximum Queue Entries: 128 00:33:59.236 Contiguous Queues Required: Yes 00:33:59.236 Arbitration Mechanisms Supported 00:33:59.236 Weighted Round Robin: Not Supported 00:33:59.236 Vendor Specific: Not Supported 00:33:59.236 Reset Timeout: 15000 ms 00:33:59.236 Doorbell Stride: 4 bytes 00:33:59.236 NVM Subsystem Reset: Not Supported 00:33:59.236 Command Sets Supported 00:33:59.236 NVM Command Set: Supported 00:33:59.236 Boot Partition: Not Supported 00:33:59.236 Memory Page Size Minimum: 4096 bytes 00:33:59.236 Memory Page Size Maximum: 4096 bytes 00:33:59.236 Persistent Memory Region: Not Supported 00:33:59.236 Optional Asynchronous Events Supported 00:33:59.236 Namespace Attribute Notices: Not Supported 00:33:59.236 Firmware Activation Notices: Not Supported 00:33:59.236 ANA Change Notices: Not Supported 00:33:59.236 PLE Aggregate Log Change Notices: Not Supported 00:33:59.236 LBA Status Info Alert Notices: Not Supported 00:33:59.236 EGE Aggregate Log Change Notices: Not Supported 00:33:59.236 Normal NVM Subsystem Shutdown event: Not Supported 00:33:59.236 Zone Descriptor Change Notices: Not Supported 00:33:59.236 
Discovery Log Change Notices: Supported 00:33:59.236 Controller Attributes 00:33:59.236 128-bit Host Identifier: Not Supported 00:33:59.236 Non-Operational Permissive Mode: Not Supported 00:33:59.236 NVM Sets: Not Supported 00:33:59.237 Read Recovery Levels: Not Supported 00:33:59.237 Endurance Groups: Not Supported 00:33:59.237 Predictable Latency Mode: Not Supported 00:33:59.237 Traffic Based Keep ALive: Not Supported 00:33:59.237 Namespace Granularity: Not Supported 00:33:59.237 SQ Associations: Not Supported 00:33:59.237 UUID List: Not Supported 00:33:59.237 Multi-Domain Subsystem: Not Supported 00:33:59.237 Fixed Capacity Management: Not Supported 00:33:59.237 Variable Capacity Management: Not Supported 00:33:59.237 Delete Endurance Group: Not Supported 00:33:59.237 Delete NVM Set: Not Supported 00:33:59.237 Extended LBA Formats Supported: Not Supported 00:33:59.237 Flexible Data Placement Supported: Not Supported 00:33:59.237 00:33:59.237 Controller Memory Buffer Support 00:33:59.237 ================================ 00:33:59.237 Supported: No 00:33:59.237 00:33:59.237 Persistent Memory Region Support 00:33:59.237 ================================ 00:33:59.237 Supported: No 00:33:59.237 00:33:59.237 Admin Command Set Attributes 00:33:59.237 ============================ 00:33:59.237 Security Send/Receive: Not Supported 00:33:59.237 Format NVM: Not Supported 00:33:59.237 Firmware Activate/Download: Not Supported 00:33:59.237 Namespace Management: Not Supported 00:33:59.237 Device Self-Test: Not Supported 00:33:59.237 Directives: Not Supported 00:33:59.237 NVMe-MI: Not Supported 00:33:59.237 Virtualization Management: Not Supported 00:33:59.237 Doorbell Buffer Config: Not Supported 00:33:59.237 Get LBA Status Capability: Not Supported 00:33:59.237 Command & Feature Lockdown Capability: Not Supported 00:33:59.237 Abort Command Limit: 1 00:33:59.237 Async Event Request Limit: 4 00:33:59.237 Number of Firmware Slots: N/A 00:33:59.237 Firmware Slot 1 Read-Only: N/A 
00:33:59.237 Firmware Activation Without Reset: N/A 00:33:59.237 Multiple Update Detection Support: N/A 00:33:59.237 Firmware Update Granularity: No Information Provided 00:33:59.237 Per-Namespace SMART Log: No 00:33:59.237 Asymmetric Namespace Access Log Page: Not Supported 00:33:59.237 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:59.237 Command Effects Log Page: Not Supported 00:33:59.237 Get Log Page Extended Data: Supported 00:33:59.237 Telemetry Log Pages: Not Supported 00:33:59.237 Persistent Event Log Pages: Not Supported 00:33:59.237 Supported Log Pages Log Page: May Support 00:33:59.237 Commands Supported & Effects Log Page: Not Supported 00:33:59.237 Feature Identifiers & Effects Log Page:May Support 00:33:59.237 NVMe-MI Commands & Effects Log Page: May Support 00:33:59.237 Data Area 4 for Telemetry Log: Not Supported 00:33:59.237 Error Log Page Entries Supported: 128 00:33:59.237 Keep Alive: Not Supported 00:33:59.237 00:33:59.237 NVM Command Set Attributes 00:33:59.237 ========================== 00:33:59.237 Submission Queue Entry Size 00:33:59.237 Max: 1 00:33:59.237 Min: 1 00:33:59.237 Completion Queue Entry Size 00:33:59.237 Max: 1 00:33:59.237 Min: 1 00:33:59.237 Number of Namespaces: 0 00:33:59.237 Compare Command: Not Supported 00:33:59.237 Write Uncorrectable Command: Not Supported 00:33:59.237 Dataset Management Command: Not Supported 00:33:59.237 Write Zeroes Command: Not Supported 00:33:59.237 Set Features Save Field: Not Supported 00:33:59.237 Reservations: Not Supported 00:33:59.237 Timestamp: Not Supported 00:33:59.237 Copy: Not Supported 00:33:59.237 Volatile Write Cache: Not Present 00:33:59.237 Atomic Write Unit (Normal): 1 00:33:59.237 Atomic Write Unit (PFail): 1 00:33:59.237 Atomic Compare & Write Unit: 1 00:33:59.237 Fused Compare & Write: Supported 00:33:59.237 Scatter-Gather List 00:33:59.237 SGL Command Set: Supported 00:33:59.237 SGL Keyed: Supported 00:33:59.237 SGL Bit Bucket Descriptor: Not Supported 00:33:59.237 
SGL Metadata Pointer: Not Supported 00:33:59.237 Oversized SGL: Not Supported 00:33:59.237 SGL Metadata Address: Not Supported 00:33:59.237 SGL Offset: Supported 00:33:59.237 Transport SGL Data Block: Not Supported 00:33:59.237 Replay Protected Memory Block: Not Supported 00:33:59.237 00:33:59.237 Firmware Slot Information 00:33:59.237 ========================= 00:33:59.237 Active slot: 0 00:33:59.237 00:33:59.237 00:33:59.237 Error Log 00:33:59.237 ========= 00:33:59.237 00:33:59.237 Active Namespaces 00:33:59.237 ================= 00:33:59.237 Discovery Log Page 00:33:59.237 ================== 00:33:59.237 Generation Counter: 2 00:33:59.237 Number of Records: 2 00:33:59.237 Record Format: 0 00:33:59.237 00:33:59.237 Discovery Log Entry 0 00:33:59.237 ---------------------- 00:33:59.237 Transport Type: 3 (TCP) 00:33:59.237 Address Family: 1 (IPv4) 00:33:59.237 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:59.237 Entry Flags: 00:33:59.237 Duplicate Returned Information: 1 00:33:59.237 Explicit Persistent Connection Support for Discovery: 1 00:33:59.237 Transport Requirements: 00:33:59.237 Secure Channel: Not Required 00:33:59.237 Port ID: 0 (0x0000) 00:33:59.237 Controller ID: 65535 (0xffff) 00:33:59.237 Admin Max SQ Size: 128 00:33:59.237 Transport Service Identifier: 4420 00:33:59.237 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:59.237 Transport Address: 10.0.0.2 00:33:59.237 Discovery Log Entry 1 00:33:59.237 ---------------------- 00:33:59.237 Transport Type: 3 (TCP) 00:33:59.237 Address Family: 1 (IPv4) 00:33:59.237 Subsystem Type: 2 (NVM Subsystem) 00:33:59.237 Entry Flags: 00:33:59.237 Duplicate Returned Information: 0 00:33:59.237 Explicit Persistent Connection Support for Discovery: 0 00:33:59.237 Transport Requirements: 00:33:59.237 Secure Channel: Not Required 00:33:59.237 Port ID: 0 (0x0000) 00:33:59.237 Controller ID: 65535 (0xffff) 00:33:59.237 Admin Max SQ Size: 128 00:33:59.237 Transport Service Identifier: 4420 
00:33:59.237 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:33:59.237 Transport Address: 10.0.0.2 [2024-10-07 14:44:22.766354] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:33:59.237 [2024-10-07 14:44:22.766370] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:33:59.237 [2024-10-07 14:44:22.766383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.237 [2024-10-07 14:44:22.766392] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000025600 00:33:59.237 [2024-10-07 14:44:22.766402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.237 [2024-10-07 14:44:22.766410] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000025600 00:33:59.237 [2024-10-07 14:44:22.766418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.237 [2024-10-07 14:44:22.766425] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:33:59.237 [2024-10-07 14:44:22.766433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.237 [2024-10-07 14:44:22.766447] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.237 [2024-10-07 14:44:22.766454] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.237 [2024-10-07 14:44:22.766461] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:33:59.237 [2024-10-07 14:44:22.766474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.237 [2024-10-07 14:44:22.766495] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:33:59.237 [2024-10-07 14:44:22.766682] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.237 [2024-10-07 14:44:22.766692] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.237 [2024-10-07 14:44:22.766698] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.237 [2024-10-07 14:44:22.766705] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:33:59.237 [2024-10-07 14:44:22.766718] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.237 [2024-10-07 14:44:22.766725] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.237 [2024-10-07 14:44:22.766731] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:33:59.237 [2024-10-07 14:44:22.766743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.237 [2024-10-07 14:44:22.766765] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:33:59.237 [2024-10-07 14:44:22.766974] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.237 [2024-10-07 14:44:22.766984] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.237 [2024-10-07 14:44:22.766989] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.237 [2024-10-07 14:44:22.766996] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:33:59.237 [2024-10-07 14:44:22.771017] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:33:59.237 [2024-10-07 
14:44:22.771030] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:33:59.237 [2024-10-07 14:44:22.771049] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.237 [2024-10-07 14:44:22.771056] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.237 [2024-10-07 14:44:22.771063] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:33:59.237 [2024-10-07 14:44:22.771076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.237 [2024-10-07 14:44:22.771096] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:33:59.237 [2024-10-07 14:44:22.771279] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.237 [2024-10-07 14:44:22.771288] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.237 [2024-10-07 14:44:22.771294] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.237 [2024-10-07 14:44:22.771300] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:33:59.237 [2024-10-07 14:44:22.771317] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 0 milliseconds 00:33:59.237 00:33:59.237 14:44:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:33:59.237 [2024-10-07 14:44:22.868434] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:33:59.237 [2024-10-07 14:44:22.868523] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3191154 ] 00:33:59.237 [2024-10-07 14:44:22.922393] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:33:59.237 [2024-10-07 14:44:22.922489] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:33:59.237 [2024-10-07 14:44:22.922502] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:33:59.237 [2024-10-07 14:44:22.922525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:33:59.237 [2024-10-07 14:44:22.922542] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:33:59.237 [2024-10-07 14:44:22.923260] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:33:59.237 [2024-10-07 14:44:22.923306] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000025600 0 00:33:59.237 [2024-10-07 14:44:22.937022] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:33:59.237 [2024-10-07 14:44:22.937044] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:33:59.237 [2024-10-07 14:44:22.937053] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:33:59.237 [2024-10-07 14:44:22.937059] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:33:59.237 [2024-10-07 14:44:22.937107] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.237 [2024-10-07 14:44:22.937119] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.237 [2024-10-07 14:44:22.937129] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:33:59.237 [2024-10-07 14:44:22.937150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:33:59.237 [2024-10-07 14:44:22.937175] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:33:59.237 [2024-10-07 14:44:22.944020] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.237 [2024-10-07 14:44:22.944041] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.237 [2024-10-07 14:44:22.944047] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.237 [2024-10-07 14:44:22.944056] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:33:59.237 [2024-10-07 14:44:22.944075] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:33:59.237 [2024-10-07 14:44:22.944089] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:33:59.237 [2024-10-07 14:44:22.944104] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:33:59.237 [2024-10-07 14:44:22.944119] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.237 [2024-10-07 14:44:22.944130] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.237 [2024-10-07 14:44:22.944137] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:33:59.237 [2024-10-07 14:44:22.944155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.237 [2024-10-07 14:44:22.944177] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:33:59.237 [2024-10-07 
14:44:22.944367] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.237 [2024-10-07 14:44:22.944380] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.237 [2024-10-07 14:44:22.944386] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.237 [2024-10-07 14:44:22.944394] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:33:59.237 [2024-10-07 14:44:22.944405] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:33:59.237 [2024-10-07 14:44:22.944419] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:33:59.237 [2024-10-07 14:44:22.944432] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.237 [2024-10-07 14:44:22.944439] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.237 [2024-10-07 14:44:22.944446] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:33:59.237 [2024-10-07 14:44:22.944460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.237 [2024-10-07 14:44:22.944477] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:33:59.501 [2024-10-07 14:44:22.944686] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.501 [2024-10-07 14:44:22.944697] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.501 [2024-10-07 14:44:22.944703] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.501 [2024-10-07 14:44:22.944709] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:33:59.501 [2024-10-07 14:44:22.944719] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:33:59.501 [2024-10-07 14:44:22.944731] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:33:59.501 [2024-10-07 14:44:22.944743] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.501 [2024-10-07 14:44:22.944751] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.501 [2024-10-07 14:44:22.944758] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:33:59.501 [2024-10-07 14:44:22.944772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.501 [2024-10-07 14:44:22.944787] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:33:59.501 [2024-10-07 14:44:22.944991] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.501 [2024-10-07 14:44:22.945008] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.501 [2024-10-07 14:44:22.945014] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.501 [2024-10-07 14:44:22.945020] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:33:59.501 [2024-10-07 14:44:22.945029] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:33:59.501 [2024-10-07 14:44:22.945043] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.501 [2024-10-07 14:44:22.945050] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.501 [2024-10-07 14:44:22.945057] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:33:59.501 [2024-10-07 14:44:22.945068] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.501 [2024-10-07 14:44:22.945087] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:33:59.501 [2024-10-07 14:44:22.945248] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.501 [2024-10-07 14:44:22.945257] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.501 [2024-10-07 14:44:22.945263] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.501 [2024-10-07 14:44:22.945269] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:33:59.501 [2024-10-07 14:44:22.945277] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:33:59.501 [2024-10-07 14:44:22.945286] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:33:59.501 [2024-10-07 14:44:22.945299] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:33:59.501 [2024-10-07 14:44:22.945408] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:33:59.501 [2024-10-07 14:44:22.945415] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:33:59.501 [2024-10-07 14:44:22.945434] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.501 [2024-10-07 14:44:22.945441] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.501 [2024-10-07 14:44:22.945450] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:33:59.501 [2024-10-07 
14:44:22.945461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.501 [2024-10-07 14:44:22.945476] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:33:59.501 [2024-10-07 14:44:22.945641] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.501 [2024-10-07 14:44:22.945651] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.501 [2024-10-07 14:44:22.945656] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.501 [2024-10-07 14:44:22.945663] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:33:59.501 [2024-10-07 14:44:22.945671] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:33:59.501 [2024-10-07 14:44:22.945687] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.501 [2024-10-07 14:44:22.945696] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.501 [2024-10-07 14:44:22.945703] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:33:59.501 [2024-10-07 14:44:22.945714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.501 [2024-10-07 14:44:22.945728] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:33:59.501 [2024-10-07 14:44:22.945931] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.501 [2024-10-07 14:44:22.945941] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.501 [2024-10-07 14:44:22.945946] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.501 [2024-10-07 14:44:22.945953] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:33:59.501 [2024-10-07 14:44:22.945961] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:33:59.501 [2024-10-07 14:44:22.945969] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:33:59.501 [2024-10-07 14:44:22.945981] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:33:59.501 [2024-10-07 14:44:22.945991] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:33:59.501 [2024-10-07 14:44:22.946016] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.501 [2024-10-07 14:44:22.946024] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:33:59.501 [2024-10-07 14:44:22.946037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.501 [2024-10-07 14:44:22.946055] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:33:59.501 [2024-10-07 14:44:22.946283] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:59.501 [2024-10-07 14:44:22.946293] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:59.501 [2024-10-07 14:44:22.946300] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:59.501 [2024-10-07 14:44:22.946308] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=0 00:33:59.501 [2024-10-07 14:44:22.946316] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:33:59.501 [2024-10-07 14:44:22.946323] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.501 [2024-10-07 14:44:22.946341] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:59.501 [2024-10-07 14:44:22.946348] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:59.501 [2024-10-07 14:44:22.946502] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.501 [2024-10-07 14:44:22.946511] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.501 [2024-10-07 14:44:22.946517] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.501 [2024-10-07 14:44:22.946525] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:33:59.501 [2024-10-07 14:44:22.946540] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:33:59.501 [2024-10-07 14:44:22.946549] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:33:59.501 [2024-10-07 14:44:22.946556] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:33:59.501 [2024-10-07 14:44:22.946566] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:33:59.501 [2024-10-07 14:44:22.946576] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:33:59.501 [2024-10-07 14:44:22.946584] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:33:59.501 [2024-10-07 14:44:22.946599] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:33:59.501 [2024-10-07 
14:44:22.946609] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.501 [2024-10-07 14:44:22.946616] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.501 [2024-10-07 14:44:22.946623] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:33:59.502 [2024-10-07 14:44:22.946637] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:59.502 [2024-10-07 14:44:22.946654] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:33:59.502 [2024-10-07 14:44:22.946824] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.502 [2024-10-07 14:44:22.946834] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.502 [2024-10-07 14:44:22.946839] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:22.946845] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:33:59.502 [2024-10-07 14:44:22.946856] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:22.946866] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:22.946874] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000025600) 00:33:59.502 [2024-10-07 14:44:22.946886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.502 [2024-10-07 14:44:22.946897] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:22.946903] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:22.946909] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on 
tqpair(0x615000025600) 00:33:59.502 [2024-10-07 14:44:22.946919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.502 [2024-10-07 14:44:22.946927] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:22.946933] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:22.946938] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000025600) 00:33:59.502 [2024-10-07 14:44:22.946948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.502 [2024-10-07 14:44:22.946956] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:22.946962] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:22.946967] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:33:59.502 [2024-10-07 14:44:22.946977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.502 [2024-10-07 14:44:22.946986] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:33:59.502 [2024-10-07 14:44:22.947005] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:33:59.502 [2024-10-07 14:44:22.947015] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:22.947022] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:33:59.502 [2024-10-07 14:44:22.947034] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP 
ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.502 [2024-10-07 14:44:22.947051] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:33:59.502 [2024-10-07 14:44:22.947059] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:33:59.502 [2024-10-07 14:44:22.947066] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:33:59.502 [2024-10-07 14:44:22.947073] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:33:59.502 [2024-10-07 14:44:22.947080] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:33:59.502 [2024-10-07 14:44:22.947307] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.502 [2024-10-07 14:44:22.947317] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.502 [2024-10-07 14:44:22.947322] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:22.947328] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:33:59.502 [2024-10-07 14:44:22.947338] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:33:59.502 [2024-10-07 14:44:22.947346] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:33:59.502 [2024-10-07 14:44:22.947360] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:33:59.502 [2024-10-07 14:44:22.947374] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:33:59.502 [2024-10-07 
14:44:22.947383] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:22.947390] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:22.947397] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:33:59.502 [2024-10-07 14:44:22.947408] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:59.502 [2024-10-07 14:44:22.947423] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:33:59.502 [2024-10-07 14:44:22.947592] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.502 [2024-10-07 14:44:22.947603] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.502 [2024-10-07 14:44:22.947608] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:22.947615] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:33:59.502 [2024-10-07 14:44:22.947700] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:33:59.502 [2024-10-07 14:44:22.947719] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:33:59.502 [2024-10-07 14:44:22.947733] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:22.947740] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:33:59.502 [2024-10-07 14:44:22.947752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.502 [2024-10-07 14:44:22.947769] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:33:59.502 [2024-10-07 14:44:22.947985] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:59.502 [2024-10-07 14:44:22.947995] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:59.502 [2024-10-07 14:44:22.948006] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:22.948012] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=4 00:33:59.502 [2024-10-07 14:44:22.948019] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:33:59.502 [2024-10-07 14:44:22.948026] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:22.948040] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:22.948046] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:22.991012] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.502 [2024-10-07 14:44:22.991032] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.502 [2024-10-07 14:44:22.991038] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:22.991045] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:33:59.502 [2024-10-07 14:44:22.991073] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:33:59.502 [2024-10-07 14:44:22.991091] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:33:59.502 [2024-10-07 14:44:22.991107] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for 
identify ns (timeout 30000 ms) 00:33:59.502 [2024-10-07 14:44:22.991121] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:22.991128] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:33:59.502 [2024-10-07 14:44:22.991147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.502 [2024-10-07 14:44:22.991167] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:33:59.502 [2024-10-07 14:44:22.991364] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:59.502 [2024-10-07 14:44:22.991374] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:59.502 [2024-10-07 14:44:22.991379] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:22.991386] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=4 00:33:59.502 [2024-10-07 14:44:22.991393] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:33:59.502 [2024-10-07 14:44:22.991405] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:22.991426] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:22.991433] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:23.033181] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.502 [2024-10-07 14:44:23.033200] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.502 [2024-10-07 14:44:23.033206] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:23.033213] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:33:59.502 [2024-10-07 14:44:23.033237] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:33:59.502 [2024-10-07 14:44:23.033253] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:33:59.502 [2024-10-07 14:44:23.033269] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:23.033279] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:33:59.502 [2024-10-07 14:44:23.033291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.502 [2024-10-07 14:44:23.033309] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:33:59.502 [2024-10-07 14:44:23.033428] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:59.502 [2024-10-07 14:44:23.033437] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:59.502 [2024-10-07 14:44:23.033443] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:23.033450] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=4 00:33:59.502 [2024-10-07 14:44:23.033457] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:33:59.502 [2024-10-07 14:44:23.033463] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:23.033474] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:59.502 [2024-10-07 
14:44:23.033479] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:23.074181] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.502 [2024-10-07 14:44:23.074200] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.502 [2024-10-07 14:44:23.074206] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.502 [2024-10-07 14:44:23.074213] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:33:59.502 [2024-10-07 14:44:23.074230] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:33:59.503 [2024-10-07 14:44:23.074243] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:33:59.503 [2024-10-07 14:44:23.074260] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:33:59.503 [2024-10-07 14:44:23.074269] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:33:59.503 [2024-10-07 14:44:23.074278] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:33:59.503 [2024-10-07 14:44:23.074287] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:33:59.503 [2024-10-07 14:44:23.074295] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:33:59.503 [2024-10-07 14:44:23.074303] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 
00:33:59.503 [2024-10-07 14:44:23.074312] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:33:59.503 [2024-10-07 14:44:23.074344] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.074352] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:33:59.503 [2024-10-07 14:44:23.074365] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.503 [2024-10-07 14:44:23.074375] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.074382] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.074389] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025600) 00:33:59.503 [2024-10-07 14:44:23.074400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:33:59.503 [2024-10-07 14:44:23.074418] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:33:59.503 [2024-10-07 14:44:23.074427] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:33:59.503 [2024-10-07 14:44:23.074522] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.503 [2024-10-07 14:44:23.074533] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.503 [2024-10-07 14:44:23.074539] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.074548] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:33:59.503 [2024-10-07 14:44:23.074560] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.503 [2024-10-07 
14:44:23.074568] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.503 [2024-10-07 14:44:23.074574] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.074580] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025600 00:33:59.503 [2024-10-07 14:44:23.074593] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.074599] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025600) 00:33:59.503 [2024-10-07 14:44:23.074612] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.503 [2024-10-07 14:44:23.074627] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:33:59.503 [2024-10-07 14:44:23.074834] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.503 [2024-10-07 14:44:23.074843] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.503 [2024-10-07 14:44:23.074848] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.074854] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025600 00:33:59.503 [2024-10-07 14:44:23.074870] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.074876] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025600) 00:33:59.503 [2024-10-07 14:44:23.074887] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.503 [2024-10-07 14:44:23.074900] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:33:59.503 
[2024-10-07 14:44:23.079012] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.503 [2024-10-07 14:44:23.079029] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.503 [2024-10-07 14:44:23.079035] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.079041] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025600 00:33:59.503 [2024-10-07 14:44:23.079058] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.079064] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025600) 00:33:59.503 [2024-10-07 14:44:23.079075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.503 [2024-10-07 14:44:23.079094] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:33:59.503 [2024-10-07 14:44:23.079289] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.503 [2024-10-07 14:44:23.079299] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.503 [2024-10-07 14:44:23.079304] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.079310] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025600 00:33:59.503 [2024-10-07 14:44:23.079336] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.079344] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000025600) 00:33:59.503 [2024-10-07 14:44:23.079356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.503 [2024-10-07 
14:44:23.079368] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.079374] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000025600) 00:33:59.503 [2024-10-07 14:44:23.079386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.503 [2024-10-07 14:44:23.079397] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.079403] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000025600) 00:33:59.503 [2024-10-07 14:44:23.079414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.503 [2024-10-07 14:44:23.079430] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.079439] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000025600) 00:33:59.503 [2024-10-07 14:44:23.079449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.503 [2024-10-07 14:44:23.079467] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:33:59.503 [2024-10-07 14:44:23.079475] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:33:59.503 [2024-10-07 14:44:23.079482] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:33:59.503 [2024-10-07 14:44:23.079489] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:33:59.503 [2024-10-07 14:44:23.079763] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:59.503 [2024-10-07 14:44:23.079773] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:59.503 [2024-10-07 14:44:23.079779] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.079786] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=8192, cccid=5 00:33:59.503 [2024-10-07 14:44:23.079794] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000025600): expected_datao=0, payload_size=8192 00:33:59.503 [2024-10-07 14:44:23.079802] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.079873] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.079881] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.079891] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:59.503 [2024-10-07 14:44:23.079903] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:59.503 [2024-10-07 14:44:23.079909] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.079915] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=512, cccid=4 00:33:59.503 [2024-10-07 14:44:23.079922] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000025600): expected_datao=0, payload_size=512 00:33:59.503 [2024-10-07 14:44:23.079928] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.079937] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.079943] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.079951] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:59.503 [2024-10-07 14:44:23.079959] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:59.503 [2024-10-07 14:44:23.079964] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.079970] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=512, cccid=6 00:33:59.503 [2024-10-07 14:44:23.079977] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x615000025600): expected_datao=0, payload_size=512 00:33:59.503 [2024-10-07 14:44:23.079983] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.079992] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.079997] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.080013] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:33:59.503 [2024-10-07 14:44:23.080022] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:33:59.503 [2024-10-07 14:44:23.080027] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.080033] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000025600): datao=0, datal=4096, cccid=7 00:33:59.503 [2024-10-07 14:44:23.080040] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000025600): expected_datao=0, payload_size=4096 00:33:59.503 [2024-10-07 14:44:23.080046] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.080064] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.080070] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.120195] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.503 [2024-10-07 14:44:23.120214] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.503 [2024-10-07 14:44:23.120220] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.120234] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000025600 00:33:59.503 [2024-10-07 14:44:23.120258] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.503 [2024-10-07 14:44:23.120273] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.503 [2024-10-07 14:44:23.120281] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.503 [2024-10-07 14:44:23.120288] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000025600 00:33:59.503 [2024-10-07 14:44:23.120301] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.504 [2024-10-07 14:44:23.120309] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.504 [2024-10-07 14:44:23.120315] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.504 [2024-10-07 14:44:23.120321] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000025600 00:33:59.504 [2024-10-07 14:44:23.120332] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.504 [2024-10-07 14:44:23.120340] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.504 [2024-10-07 14:44:23.120345] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.504 [2024-10-07 14:44:23.120351] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000025600 00:33:59.504 ===================================================== 00:33:59.504 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:33:59.504 ===================================================== 00:33:59.504 Controller Capabilities/Features 00:33:59.504 ================================ 00:33:59.504 Vendor ID: 8086 00:33:59.504 Subsystem Vendor ID: 8086 00:33:59.504 Serial Number: SPDK00000000000001 00:33:59.504 Model Number: SPDK bdev Controller 00:33:59.504 Firmware Version: 25.01 00:33:59.504 Recommended Arb Burst: 6 00:33:59.504 IEEE OUI Identifier: e4 d2 5c 00:33:59.504 Multi-path I/O 00:33:59.504 May have multiple subsystem ports: Yes 00:33:59.504 May have multiple controllers: Yes 00:33:59.504 Associated with SR-IOV VF: No 00:33:59.504 Max Data Transfer Size: 131072 00:33:59.504 Max Number of Namespaces: 32 00:33:59.504 Max Number of I/O Queues: 127 00:33:59.504 NVMe Specification Version (VS): 1.3 00:33:59.504 NVMe Specification Version (Identify): 1.3 00:33:59.504 Maximum Queue Entries: 128 00:33:59.504 Contiguous Queues Required: Yes 00:33:59.504 Arbitration Mechanisms Supported 00:33:59.504 Weighted Round Robin: Not Supported 00:33:59.504 Vendor Specific: Not Supported 00:33:59.504 Reset Timeout: 15000 ms 00:33:59.504 Doorbell Stride: 4 bytes 00:33:59.504 NVM Subsystem Reset: Not Supported 00:33:59.504 Command Sets Supported 00:33:59.504 NVM Command Set: Supported 00:33:59.504 Boot Partition: Not Supported 00:33:59.504 Memory Page Size Minimum: 4096 bytes 00:33:59.504 Memory Page Size Maximum: 4096 bytes 00:33:59.504 Persistent Memory Region: Not Supported 00:33:59.504 Optional Asynchronous Events Supported 00:33:59.504 Namespace Attribute Notices: Supported 00:33:59.504 Firmware Activation Notices: Not Supported 00:33:59.504 ANA Change Notices: Not Supported 00:33:59.504 PLE Aggregate Log Change Notices: Not Supported 00:33:59.504 LBA Status Info Alert Notices: Not Supported 00:33:59.504 EGE Aggregate Log Change Notices: Not Supported 00:33:59.504 Normal NVM Subsystem Shutdown event: Not Supported 00:33:59.504 Zone Descriptor Change Notices: Not Supported 00:33:59.504 Discovery 
Log Change Notices: Not Supported 00:33:59.504 Controller Attributes 00:33:59.504 128-bit Host Identifier: Supported 00:33:59.504 Non-Operational Permissive Mode: Not Supported 00:33:59.504 NVM Sets: Not Supported 00:33:59.504 Read Recovery Levels: Not Supported 00:33:59.504 Endurance Groups: Not Supported 00:33:59.504 Predictable Latency Mode: Not Supported 00:33:59.504 Traffic Based Keep ALive: Not Supported 00:33:59.504 Namespace Granularity: Not Supported 00:33:59.504 SQ Associations: Not Supported 00:33:59.504 UUID List: Not Supported 00:33:59.504 Multi-Domain Subsystem: Not Supported 00:33:59.504 Fixed Capacity Management: Not Supported 00:33:59.504 Variable Capacity Management: Not Supported 00:33:59.504 Delete Endurance Group: Not Supported 00:33:59.504 Delete NVM Set: Not Supported 00:33:59.504 Extended LBA Formats Supported: Not Supported 00:33:59.504 Flexible Data Placement Supported: Not Supported 00:33:59.504 00:33:59.504 Controller Memory Buffer Support 00:33:59.504 ================================ 00:33:59.504 Supported: No 00:33:59.504 00:33:59.504 Persistent Memory Region Support 00:33:59.504 ================================ 00:33:59.504 Supported: No 00:33:59.504 00:33:59.504 Admin Command Set Attributes 00:33:59.504 ============================ 00:33:59.504 Security Send/Receive: Not Supported 00:33:59.504 Format NVM: Not Supported 00:33:59.504 Firmware Activate/Download: Not Supported 00:33:59.504 Namespace Management: Not Supported 00:33:59.504 Device Self-Test: Not Supported 00:33:59.504 Directives: Not Supported 00:33:59.504 NVMe-MI: Not Supported 00:33:59.504 Virtualization Management: Not Supported 00:33:59.504 Doorbell Buffer Config: Not Supported 00:33:59.504 Get LBA Status Capability: Not Supported 00:33:59.504 Command & Feature Lockdown Capability: Not Supported 00:33:59.504 Abort Command Limit: 4 00:33:59.504 Async Event Request Limit: 4 00:33:59.504 Number of Firmware Slots: N/A 00:33:59.504 Firmware Slot 1 Read-Only: N/A 00:33:59.504 
Firmware Activation Without Reset: N/A 00:33:59.504 Multiple Update Detection Support: N/A 00:33:59.504 Firmware Update Granularity: No Information Provided 00:33:59.504 Per-Namespace SMART Log: No 00:33:59.504 Asymmetric Namespace Access Log Page: Not Supported 00:33:59.504 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:33:59.504 Command Effects Log Page: Supported 00:33:59.504 Get Log Page Extended Data: Supported 00:33:59.504 Telemetry Log Pages: Not Supported 00:33:59.504 Persistent Event Log Pages: Not Supported 00:33:59.504 Supported Log Pages Log Page: May Support 00:33:59.504 Commands Supported & Effects Log Page: Not Supported 00:33:59.504 Feature Identifiers & Effects Log Page:May Support 00:33:59.504 NVMe-MI Commands & Effects Log Page: May Support 00:33:59.504 Data Area 4 for Telemetry Log: Not Supported 00:33:59.504 Error Log Page Entries Supported: 128 00:33:59.504 Keep Alive: Supported 00:33:59.504 Keep Alive Granularity: 10000 ms 00:33:59.504 00:33:59.504 NVM Command Set Attributes 00:33:59.504 ========================== 00:33:59.504 Submission Queue Entry Size 00:33:59.504 Max: 64 00:33:59.504 Min: 64 00:33:59.504 Completion Queue Entry Size 00:33:59.504 Max: 16 00:33:59.504 Min: 16 00:33:59.504 Number of Namespaces: 32 00:33:59.504 Compare Command: Supported 00:33:59.504 Write Uncorrectable Command: Not Supported 00:33:59.504 Dataset Management Command: Supported 00:33:59.504 Write Zeroes Command: Supported 00:33:59.504 Set Features Save Field: Not Supported 00:33:59.504 Reservations: Supported 00:33:59.504 Timestamp: Not Supported 00:33:59.504 Copy: Supported 00:33:59.504 Volatile Write Cache: Present 00:33:59.504 Atomic Write Unit (Normal): 1 00:33:59.504 Atomic Write Unit (PFail): 1 00:33:59.504 Atomic Compare & Write Unit: 1 00:33:59.504 Fused Compare & Write: Supported 00:33:59.504 Scatter-Gather List 00:33:59.504 SGL Command Set: Supported 00:33:59.504 SGL Keyed: Supported 00:33:59.504 SGL Bit Bucket Descriptor: Not Supported 00:33:59.504 SGL 
Metadata Pointer: Not Supported 00:33:59.504 Oversized SGL: Not Supported 00:33:59.504 SGL Metadata Address: Not Supported 00:33:59.504 SGL Offset: Supported 00:33:59.504 Transport SGL Data Block: Not Supported 00:33:59.504 Replay Protected Memory Block: Not Supported 00:33:59.504 00:33:59.504 Firmware Slot Information 00:33:59.504 ========================= 00:33:59.504 Active slot: 1 00:33:59.504 Slot 1 Firmware Revision: 25.01 00:33:59.504 00:33:59.504 00:33:59.504 Commands Supported and Effects 00:33:59.504 ============================== 00:33:59.504 Admin Commands 00:33:59.504 -------------- 00:33:59.504 Get Log Page (02h): Supported 00:33:59.504 Identify (06h): Supported 00:33:59.504 Abort (08h): Supported 00:33:59.504 Set Features (09h): Supported 00:33:59.504 Get Features (0Ah): Supported 00:33:59.504 Asynchronous Event Request (0Ch): Supported 00:33:59.504 Keep Alive (18h): Supported 00:33:59.504 I/O Commands 00:33:59.504 ------------ 00:33:59.504 Flush (00h): Supported LBA-Change 00:33:59.504 Write (01h): Supported LBA-Change 00:33:59.504 Read (02h): Supported 00:33:59.504 Compare (05h): Supported 00:33:59.504 Write Zeroes (08h): Supported LBA-Change 00:33:59.504 Dataset Management (09h): Supported LBA-Change 00:33:59.504 Copy (19h): Supported LBA-Change 00:33:59.504 00:33:59.504 Error Log 00:33:59.504 ========= 00:33:59.504 00:33:59.504 Arbitration 00:33:59.504 =========== 00:33:59.504 Arbitration Burst: 1 00:33:59.504 00:33:59.504 Power Management 00:33:59.504 ================ 00:33:59.504 Number of Power States: 1 00:33:59.504 Current Power State: Power State #0 00:33:59.504 Power State #0: 00:33:59.504 Max Power: 0.00 W 00:33:59.504 Non-Operational State: Operational 00:33:59.504 Entry Latency: Not Reported 00:33:59.504 Exit Latency: Not Reported 00:33:59.504 Relative Read Throughput: 0 00:33:59.504 Relative Read Latency: 0 00:33:59.504 Relative Write Throughput: 0 00:33:59.504 Relative Write Latency: 0 00:33:59.504 Idle Power: Not Reported 
00:33:59.504 Active Power: Not Reported 00:33:59.504 Non-Operational Permissive Mode: Not Supported 00:33:59.504 00:33:59.504 Health Information 00:33:59.504 ================== 00:33:59.504 Critical Warnings: 00:33:59.504 Available Spare Space: OK 00:33:59.504 Temperature: OK 00:33:59.504 Device Reliability: OK 00:33:59.504 Read Only: No 00:33:59.504 Volatile Memory Backup: OK 00:33:59.504 Current Temperature: 0 Kelvin (-273 Celsius) 00:33:59.505 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:33:59.505 Available Spare: 0% 00:33:59.505 Available Spare Threshold: 0% 00:33:59.505 Life Percentage Used:[2024-10-07 14:44:23.120509] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.120519] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000025600) 00:33:59.505 [2024-10-07 14:44:23.120533] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.505 [2024-10-07 14:44:23.120551] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:33:59.505 [2024-10-07 14:44:23.120731] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.505 [2024-10-07 14:44:23.120742] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.505 [2024-10-07 14:44:23.120748] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.120755] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000025600 00:33:59.505 [2024-10-07 14:44:23.120803] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:33:59.505 [2024-10-07 14:44:23.120819] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000025600 00:33:59.505 [2024-10-07 14:44:23.120831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.505 [2024-10-07 14:44:23.120840] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000025600 00:33:59.505 [2024-10-07 14:44:23.120849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.505 [2024-10-07 14:44:23.120856] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000025600 00:33:59.505 [2024-10-07 14:44:23.120864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.505 [2024-10-07 14:44:23.120872] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:33:59.505 [2024-10-07 14:44:23.120880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:59.505 [2024-10-07 14:44:23.120892] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.120899] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.120910] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:33:59.505 [2024-10-07 14:44:23.120922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.505 [2024-10-07 14:44:23.120940] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:33:59.505 [2024-10-07 14:44:23.121110] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.505 [2024-10-07 14:44:23.121121] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.505 [2024-10-07 14:44:23.121130] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.121137] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:33:59.505 [2024-10-07 14:44:23.121152] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.121159] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.121166] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:33:59.505 [2024-10-07 14:44:23.121178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.505 [2024-10-07 14:44:23.121197] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:33:59.505 [2024-10-07 14:44:23.121407] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.505 [2024-10-07 14:44:23.121416] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.505 [2024-10-07 14:44:23.121422] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.121428] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:33:59.505 [2024-10-07 14:44:23.121437] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:33:59.505 [2024-10-07 14:44:23.121444] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:33:59.505 [2024-10-07 14:44:23.121459] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.121469] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.121475] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x615000025600) 00:33:59.505 [2024-10-07 14:44:23.121491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.505 [2024-10-07 14:44:23.121506] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:33:59.505 [2024-10-07 14:44:23.121655] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.505 [2024-10-07 14:44:23.121664] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.505 [2024-10-07 14:44:23.121669] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.121675] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:33:59.505 [2024-10-07 14:44:23.121691] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.121697] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.121703] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:33:59.505 [2024-10-07 14:44:23.121714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.505 [2024-10-07 14:44:23.121727] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:33:59.505 [2024-10-07 14:44:23.121923] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.505 [2024-10-07 14:44:23.121932] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.505 [2024-10-07 14:44:23.121938] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.121944] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:33:59.505 [2024-10-07 14:44:23.121958] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.121964] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.121970] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:33:59.505 [2024-10-07 14:44:23.121981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.505 [2024-10-07 14:44:23.121994] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:33:59.505 [2024-10-07 14:44:23.122195] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.505 [2024-10-07 14:44:23.122205] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.505 [2024-10-07 14:44:23.122211] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.122217] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:33:59.505 [2024-10-07 14:44:23.122230] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.122237] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.122243] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:33:59.505 [2024-10-07 14:44:23.122253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.505 [2024-10-07 14:44:23.122267] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:33:59.505 [2024-10-07 14:44:23.122427] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.505 [2024-10-07 14:44:23.122436] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.505 [2024-10-07 
14:44:23.122442] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.122448] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:33:59.505 [2024-10-07 14:44:23.122462] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.122468] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.122474] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:33:59.505 [2024-10-07 14:44:23.122484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.505 [2024-10-07 14:44:23.122498] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:33:59.505 [2024-10-07 14:44:23.122668] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.505 [2024-10-07 14:44:23.122677] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.505 [2024-10-07 14:44:23.122682] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.122689] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:33:59.505 [2024-10-07 14:44:23.122702] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.122709] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.505 [2024-10-07 14:44:23.122715] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:33:59.505 [2024-10-07 14:44:23.122725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.505 [2024-10-07 14:44:23.122738] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x62600001b580, cid 3, qid 0 00:33:59.505 [2024-10-07 14:44:23.122938] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.505 [2024-10-07 14:44:23.122947] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.506 [2024-10-07 14:44:23.122952] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.506 [2024-10-07 14:44:23.122959] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:33:59.506 [2024-10-07 14:44:23.122972] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:33:59.506 [2024-10-07 14:44:23.122978] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:33:59.506 [2024-10-07 14:44:23.122984] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000025600) 00:33:59.506 [2024-10-07 14:44:23.122994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.506 [2024-10-07 14:44:23.127025] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:33:59.506 [2024-10-07 14:44:23.127226] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:33:59.506 [2024-10-07 14:44:23.127236] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:33:59.506 [2024-10-07 14:44:23.127242] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:33:59.506 [2024-10-07 14:44:23.127248] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000025600 00:33:59.506 [2024-10-07 14:44:23.127261] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:33:59.506 0% 00:33:59.506 Data Units Read: 0 00:33:59.506 Data Units Written: 0 00:33:59.506 Host Read Commands: 0 00:33:59.506 Host Write Commands: 0 00:33:59.506 Controller Busy Time: 0 
minutes 00:33:59.506 Power Cycles: 0 00:33:59.506 Power On Hours: 0 hours 00:33:59.506 Unsafe Shutdowns: 0 00:33:59.506 Unrecoverable Media Errors: 0 00:33:59.506 Lifetime Error Log Entries: 0 00:33:59.506 Warning Temperature Time: 0 minutes 00:33:59.506 Critical Temperature Time: 0 minutes 00:33:59.506 00:33:59.506 Number of Queues 00:33:59.506 ================ 00:33:59.506 Number of I/O Submission Queues: 127 00:33:59.506 Number of I/O Completion Queues: 127 00:33:59.506 00:33:59.506 Active Namespaces 00:33:59.506 ================= 00:33:59.506 Namespace ID:1 00:33:59.506 Error Recovery Timeout: Unlimited 00:33:59.506 Command Set Identifier: NVM (00h) 00:33:59.506 Deallocate: Supported 00:33:59.506 Deallocated/Unwritten Error: Not Supported 00:33:59.506 Deallocated Read Value: Unknown 00:33:59.506 Deallocate in Write Zeroes: Not Supported 00:33:59.506 Deallocated Guard Field: 0xFFFF 00:33:59.506 Flush: Supported 00:33:59.506 Reservation: Supported 00:33:59.506 Namespace Sharing Capabilities: Multiple Controllers 00:33:59.506 Size (in LBAs): 131072 (0GiB) 00:33:59.506 Capacity (in LBAs): 131072 (0GiB) 00:33:59.506 Utilization (in LBAs): 131072 (0GiB) 00:33:59.506 NGUID: ABCDEF0123456789ABCDEF0123456789 00:33:59.506 EUI64: ABCDEF0123456789 00:33:59.506 UUID: 5b1eea9c-4a13-4dc2-987f-a51b0d55323a 00:33:59.506 Thin Provisioning: Not Supported 00:33:59.506 Per-NS Atomic Units: Yes 00:33:59.506 Atomic Boundary Size (Normal): 0 00:33:59.506 Atomic Boundary Size (PFail): 0 00:33:59.506 Atomic Boundary Offset: 0 00:33:59.506 Maximum Single Source Range Length: 65535 00:33:59.506 Maximum Copy Length: 65535 00:33:59.506 Maximum Source Range Count: 1 00:33:59.506 NGUID/EUI64 Never Reused: No 00:33:59.506 Namespace Write Protected: No 00:33:59.506 Number of LBA Formats: 1 00:33:59.506 Current LBA Format: LBA Format #00 00:33:59.506 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:59.506 00:33:59.506 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # 
sync 00:33:59.506 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:59.506 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.506 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:33:59.506 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.506 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:33:59.506 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:33:59.506 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:59.506 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:33:59.506 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:59.506 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:33:59.506 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:59.506 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:59.506 rmmod nvme_tcp 00:33:59.767 rmmod nvme_fabrics 00:33:59.767 rmmod nvme_keyring 00:33:59.767 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:59.767 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:33:59.767 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:33:59.767 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 3190802 ']' 00:33:59.767 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 3190802 00:33:59.767 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 3190802 ']' 00:33:59.767 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@954 -- # kill -0 3190802 00:33:59.767 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:33:59.767 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:59.767 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3190802 00:33:59.767 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:59.767 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:59.767 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3190802' 00:33:59.767 killing process with pid 3190802 00:33:59.767 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 3190802 00:33:59.767 14:44:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 3190802 00:34:00.709 14:44:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:00.709 14:44:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:00.709 14:44:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:00.709 14:44:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:34:00.709 14:44:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:34:00.709 14:44:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:00.709 14:44:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:34:00.709 14:44:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:00.709 14:44:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:00.709 14:44:24 nvmf_tcp.nvmf_host.nvmf_identify -- 
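The teardown trace above runs `killprocess 3190802`: probe the pid with `kill -0`, check the command name (refusing to kill `sudo`), then SIGKILL and wait. A simplified sketch of that pattern (the function name mirrors autotest_common.sh, but this body is a reconstruction, not the actual implementation):

```shell
#!/usr/bin/env bash
# Simplified sketch of the killprocess pattern seen in the log above:
# verify the pid exists, never kill a sudo process by accident,
# then SIGKILL and reap it.
killprocess() {
    local pid=$1
    # kill -0 only probes for existence; it sends no signal
    kill -0 "$pid" 2>/dev/null || { echo "pid $pid not running"; return 1; }
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid" 2>/dev/null)
    if [ "$process_name" = "sudo" ]; then
        echo "refusing to kill sudo (pid $pid)"
        return 1
    fi
    echo "killing process with pid $pid"
    kill -9 "$pid"
    wait "$pid" 2>/dev/null   # reap; ignore "not a child" errors
    return 0
}
```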
nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:00.709 14:44:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:00.709 14:44:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:03.255 00:34:03.255 real 0m12.623s 00:34:03.255 user 0m11.129s 00:34:03.255 sys 0m6.238s 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:34:03.255 ************************************ 00:34:03.255 END TEST nvmf_identify 00:34:03.255 ************************************ 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.255 ************************************ 00:34:03.255 START TEST nvmf_perf 00:34:03.255 ************************************ 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:34:03.255 * Looking for test storage... 
00:34:03.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:34:03.255 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:34:03.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.256 --rc genhtml_branch_coverage=1 00:34:03.256 --rc genhtml_function_coverage=1 00:34:03.256 --rc genhtml_legend=1 00:34:03.256 --rc geninfo_all_blocks=1 00:34:03.256 --rc geninfo_unexecuted_blocks=1 00:34:03.256 00:34:03.256 ' 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:34:03.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:34:03.256 --rc genhtml_branch_coverage=1 00:34:03.256 --rc genhtml_function_coverage=1 00:34:03.256 --rc genhtml_legend=1 00:34:03.256 --rc geninfo_all_blocks=1 00:34:03.256 --rc geninfo_unexecuted_blocks=1 00:34:03.256 00:34:03.256 ' 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:34:03.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.256 --rc genhtml_branch_coverage=1 00:34:03.256 --rc genhtml_function_coverage=1 00:34:03.256 --rc genhtml_legend=1 00:34:03.256 --rc geninfo_all_blocks=1 00:34:03.256 --rc geninfo_unexecuted_blocks=1 00:34:03.256 00:34:03.256 ' 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:34:03.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.256 --rc genhtml_branch_coverage=1 00:34:03.256 --rc genhtml_function_coverage=1 00:34:03.256 --rc genhtml_legend=1 00:34:03.256 --rc geninfo_all_blocks=1 00:34:03.256 --rc geninfo_unexecuted_blocks=1 00:34:03.256 00:34:03.256 ' 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
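The `lt 1.15 2` trace earlier in this run implements a dotted-version comparison by splitting both versions on `.` and comparing fields numerically until one differs. A hedged re-sketch of that logic (names mirror scripts/common.sh, but the body is a simplified reconstruction):

```shell
#!/usr/bin/env bash
# Compare two dotted versions field by field, as the lt/cmp_versions
# trace above does: split on ".", treat missing fields as 0, and
# decide at the first differing field.
cmp_versions() {
    local IFS=. op=$2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( 10#$a > 10#$b )); then [[ $op == '>' ]]; return; fi
        if (( 10#$a < 10#$b )); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '==' || $op == '>=' || $op == '<=' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }
```

The numeric comparison matters: a naive string compare would rank 1.9 above 1.15, whereas field-wise numeric comparison correctly treats 15 > 9.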
00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:03.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:03.256 14:44:26 
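The paths/export.sh output above shows the same `/opt/go`, `/opt/protoc`, and `/opt/golangci` directories prepended many times over, so PATH grows with duplicates each time the file is sourced. A dedup pass like the following (a hypothetical helper, not part of the SPDK tree) keeps the first occurrence of each entry while preserving order:

```shell
#!/usr/bin/env bash
# Hypothetical PATH dedup helper: walk the colon-separated list,
# remember entries already seen, and emit each directory only once,
# in its original order.
dedup_path() {
    local out= seen=: entry
    local IFS=:
    for entry in $1; do
        [[ $seen == *":$entry:"* ]] && continue
        seen+="$entry:"
        out+="${out:+:}$entry"
    done
    printf '%s\n' "$out"
}
```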
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:34:03.256 14:44:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:34:11.402 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:11.402 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:34:11.402 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:11.402 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:11.402 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:11.403 14:44:33 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:11.403 
14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:11.403 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:11.403 Found 0000:31:00.1 (0x8086 - 
0x159b) 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:11.403 Found net devices under 0000:31:00.0: cvl_0_0 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:11.403 14:44:33 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:11.403 Found net devices under 0000:31:00.1: cvl_0_1 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
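The discovery loop above resolves each PCI function to its netdev names by globbing the device's sysfs `net/` directory and stripping the path prefix, producing the "Found net devices under 0000:31:00.x: cvl_0_x" lines. A sketch of that step, parameterized on the device directory so it can be exercised without real hardware (the real loop uses `/sys/bus/pci/devices/$pci`):

```shell
#!/usr/bin/env bash
# Sketch of the pci_net_devs discovery above: list the netdev names
# published under a PCI function's sysfs "net/" directory.
list_pci_netdevs() {
    local devdir=$1
    local pci_net_devs=("$devdir"/net/*)
    # An unmatched glob stays literal; report "none" in that case
    [[ -e ${pci_net_devs[0]} ]] || { echo "no net devices under $devdir"; return 1; }
    # Strip the directory prefix, keeping just the interface names
    pci_net_devs=("${pci_net_devs[@]##*/}")
    printf '%s\n' "${pci_net_devs[@]}"
}
```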
NVMF_TARGET_INTERFACE=cvl_0_0 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:11.403 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:11.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:11.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:34:11.404 00:34:11.404 --- 10.0.0.2 ping statistics --- 00:34:11.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.404 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:11.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:11.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:34:11.404 00:34:11.404 --- 10.0.0.1 ping statistics --- 00:34:11.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.404 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter 
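The nvmf_tcp_init sequence traced above moves the target interface into its own network namespace with 10.0.0.2/24, keeps 10.0.0.1/24 on the initiator side, brings the links up, and opens TCP/4420 in iptables, so host-to-namespace traffic exercises the real NIC pair. The following renders that sequence as a dry run (a hypothetical helper that prints rather than executes, since the real commands need root and the physical cvl_* interfaces; the real script also tags the iptables rule with an SPDK_NVMF comment):

```shell
#!/usr/bin/env bash
# Dry-run rendering of the netns bring-up steps traced above.
render_tcp_init() {
    local tgt_if=$1 ini_if=$2 ns=$3
    printf '%s\n' \
        "ip netns add $ns" \
        "ip link set $tgt_if netns $ns" \
        "ip addr add 10.0.0.1/24 dev $ini_if" \
        "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if" \
        "ip link set $ini_if up" \
        "ip netns exec $ns ip link set $tgt_if up" \
        "ip netns exec $ns ip link set lo up" \
        "iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT"
}
render_tcp_init cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

The two pings that follow in the log (host to 10.0.0.2, then `ip netns exec ... ping 10.0.0.1`) confirm this topology before the target starts.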
start_nvmf_tgt 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=3195546 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 3195546 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 3195546 ']' 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:11.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:34:11.404 14:44:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:34:11.404 [2024-10-07 14:44:34.090882] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:34:11.404 [2024-10-07 14:44:34.091018] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:11.404 [2024-10-07 14:44:34.230283] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:11.404 [2024-10-07 14:44:34.416192] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:11.404 [2024-10-07 14:44:34.416235] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:11.404 [2024-10-07 14:44:34.416246] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:11.404 [2024-10-07 14:44:34.416260] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:11.404 [2024-10-07 14:44:34.416269] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:11.404 [2024-10-07 14:44:34.418730] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:34:11.404 [2024-10-07 14:44:34.418812] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:34:11.404 [2024-10-07 14:44:34.418928] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.404 [2024-10-07 14:44:34.418950] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:34:11.404 14:44:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:11.404 14:44:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:34:11.404 14:44:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:11.404 14:44:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:11.404 14:44:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:34:11.404 14:44:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:11.404 14:44:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:11.404 14:44:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:34:11.975 14:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:34:11.975 14:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:34:11.975 14:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:34:11.975 14:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:12.236 14:44:35 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:34:12.236 14:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:34:12.236 14:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:34:12.236 14:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:34:12.236 14:44:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:34:12.497 [2024-10-07 14:44:36.006665] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:12.497 14:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:12.758 14:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:34:12.758 14:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:12.758 14:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:34:12.758 14:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:13.019 14:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:13.280 [2024-10-07 14:44:36.733318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:13.280 14:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:34:13.280 14:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:34:13.280 14:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:34:13.280 14:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:34:13.280 14:44:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:34:14.667 Initializing NVMe Controllers 00:34:14.667 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:34:14.667 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:34:14.667 Initialization complete. Launching workers. 00:34:14.667 ======================================================== 00:34:14.667 Latency(us) 00:34:14.667 Device Information : IOPS MiB/s Average min max 00:34:14.667 PCIE (0000:65:00.0) NSID 1 from core 0: 73090.24 285.51 437.03 15.64 5039.70 00:34:14.667 ======================================================== 00:34:14.667 Total : 73090.24 285.51 437.03 15.64 5039.70 00:34:14.667 00:34:14.927 14:44:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:16.312 Initializing NVMe Controllers 00:34:16.312 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:16.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:16.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:34:16.312 Initialization complete. Launching workers. 
00:34:16.312 ======================================================== 00:34:16.312 Latency(us) 00:34:16.312 Device Information : IOPS MiB/s Average min max 00:34:16.312 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 62.00 0.24 16219.05 261.79 45599.96 00:34:16.312 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 68.00 0.27 15022.11 7954.06 47913.49 00:34:16.312 ======================================================== 00:34:16.312 Total : 130.00 0.51 15592.96 261.79 47913.49 00:34:16.312 00:34:16.312 14:44:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:17.702 Initializing NVMe Controllers 00:34:17.702 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:17.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:17.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:34:17.702 Initialization complete. Launching workers. 
00:34:17.702 ======================================================== 00:34:17.702 Latency(us) 00:34:17.702 Device Information : IOPS MiB/s Average min max 00:34:17.702 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9558.59 37.34 3348.86 538.04 6948.52 00:34:17.702 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3845.43 15.02 8364.29 6684.57 15964.00 00:34:17.702 ======================================================== 00:34:17.702 Total : 13404.02 52.36 4787.72 538.04 15964.00 00:34:17.702 00:34:17.963 14:44:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:34:17.963 14:44:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:34:17.963 14:44:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:20.505 Initializing NVMe Controllers 00:34:20.505 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:20.505 Controller IO queue size 128, less than required. 00:34:20.505 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:20.505 Controller IO queue size 128, less than required. 00:34:20.505 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:20.505 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:20.505 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:34:20.505 Initialization complete. Launching workers. 
00:34:20.505 ======================================================== 00:34:20.505 Latency(us) 00:34:20.505 Device Information : IOPS MiB/s Average min max 00:34:20.505 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1365.50 341.37 97132.57 60376.95 236926.94 00:34:20.505 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 536.50 134.12 253606.14 110189.42 406314.01 00:34:20.505 ======================================================== 00:34:20.505 Total : 1902.00 475.50 141269.30 60376.95 406314.01 00:34:20.505 00:34:20.505 14:44:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:34:20.766 No valid NVMe controllers or AIO or URING devices found 00:34:20.766 Initializing NVMe Controllers 00:34:20.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:20.766 Controller IO queue size 128, less than required. 00:34:20.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:20.766 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:34:20.766 Controller IO queue size 128, less than required. 00:34:20.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:20.766 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:34:20.766 WARNING: Some requested NVMe devices were skipped 00:34:20.766 14:44:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:34:24.064 Initializing NVMe Controllers 00:34:24.064 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:24.064 Controller IO queue size 128, less than required. 00:34:24.064 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:24.064 Controller IO queue size 128, less than required. 00:34:24.064 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:24.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:24.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:34:24.064 Initialization complete. Launching workers. 
00:34:24.064 00:34:24.064 ==================== 00:34:24.064 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:34:24.064 TCP transport: 00:34:24.064 polls: 13455 00:34:24.064 idle_polls: 6303 00:34:24.064 sock_completions: 7152 00:34:24.064 nvme_completions: 5815 00:34:24.064 submitted_requests: 8714 00:34:24.064 queued_requests: 1 00:34:24.064 00:34:24.064 ==================== 00:34:24.064 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:34:24.064 TCP transport: 00:34:24.064 polls: 16635 00:34:24.064 idle_polls: 9581 00:34:24.064 sock_completions: 7054 00:34:24.064 nvme_completions: 6031 00:34:24.064 submitted_requests: 9080 00:34:24.064 queued_requests: 1 00:34:24.064 ======================================================== 00:34:24.064 Latency(us) 00:34:24.064 Device Information : IOPS MiB/s Average min max 00:34:24.064 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1453.50 363.37 92294.06 50504.68 319085.60 00:34:24.064 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1507.50 376.87 86024.59 51138.14 312929.93 00:34:24.064 ======================================================== 00:34:24.064 Total : 2961.00 740.25 89102.16 50504.68 319085.60 00:34:24.064 00:34:24.064 14:44:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:34:24.064 14:44:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:24.064 14:44:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:34:24.064 14:44:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:34:24.064 14:44:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:34:25.019 14:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=fd7f694d-0ad5-428d-b08e-30bf788144dd 00:34:25.019 14:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb fd7f694d-0ad5-428d-b08e-30bf788144dd 00:34:25.019 14:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=fd7f694d-0ad5-428d-b08e-30bf788144dd 00:34:25.019 14:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:34:25.019 14:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:34:25.019 14:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:34:25.019 14:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:25.019 14:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:34:25.019 { 00:34:25.019 "uuid": "fd7f694d-0ad5-428d-b08e-30bf788144dd", 00:34:25.019 "name": "lvs_0", 00:34:25.019 "base_bdev": "Nvme0n1", 00:34:25.019 "total_data_clusters": 457407, 00:34:25.019 "free_clusters": 457407, 00:34:25.019 "block_size": 512, 00:34:25.019 "cluster_size": 4194304 00:34:25.019 } 00:34:25.019 ]' 00:34:25.019 14:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="fd7f694d-0ad5-428d-b08e-30bf788144dd") .free_clusters' 00:34:25.019 14:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=457407 00:34:25.019 14:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="fd7f694d-0ad5-428d-b08e-30bf788144dd") .cluster_size' 00:34:25.280 14:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:34:25.280 14:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=1829628 00:34:25.280 14:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 1829628 
00:34:25.280 1829628 00:34:25.280 14:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:34:25.280 14:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:34:25.280 14:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fd7f694d-0ad5-428d-b08e-30bf788144dd lbd_0 20480 00:34:25.280 14:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=9663a6c6-7c04-4d15-9428-ce1a7c74a982 00:34:25.280 14:44:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 9663a6c6-7c04-4d15-9428-ce1a7c74a982 lvs_n_0 00:34:27.276 14:44:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=682df7d6-1cfe-481f-bcca-343f2ddd6177 00:34:27.276 14:44:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 682df7d6-1cfe-481f-bcca-343f2ddd6177 00:34:27.276 14:44:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=682df7d6-1cfe-481f-bcca-343f2ddd6177 00:34:27.276 14:44:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:34:27.276 14:44:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:34:27.276 14:44:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:34:27.276 14:44:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:27.276 14:44:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:34:27.277 { 00:34:27.277 "uuid": "fd7f694d-0ad5-428d-b08e-30bf788144dd", 00:34:27.277 "name": "lvs_0", 00:34:27.277 "base_bdev": "Nvme0n1", 00:34:27.277 "total_data_clusters": 457407, 00:34:27.277 "free_clusters": 452287, 00:34:27.277 "block_size": 512, 00:34:27.277 
"cluster_size": 4194304 00:34:27.277 }, 00:34:27.277 { 00:34:27.277 "uuid": "682df7d6-1cfe-481f-bcca-343f2ddd6177", 00:34:27.277 "name": "lvs_n_0", 00:34:27.277 "base_bdev": "9663a6c6-7c04-4d15-9428-ce1a7c74a982", 00:34:27.277 "total_data_clusters": 5114, 00:34:27.277 "free_clusters": 5114, 00:34:27.277 "block_size": 512, 00:34:27.277 "cluster_size": 4194304 00:34:27.277 } 00:34:27.277 ]' 00:34:27.277 14:44:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="682df7d6-1cfe-481f-bcca-343f2ddd6177") .free_clusters' 00:34:27.277 14:44:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:34:27.277 14:44:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="682df7d6-1cfe-481f-bcca-343f2ddd6177") .cluster_size' 00:34:27.277 14:44:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:34:27.277 14:44:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:34:27.277 14:44:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:34:27.277 20456 00:34:27.277 14:44:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:34:27.277 14:44:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 682df7d6-1cfe-481f-bcca-343f2ddd6177 lbd_nest_0 20456 00:34:27.575 14:44:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=b04a6c53-b6e0-4a12-8867-afa949a732cb 00:34:27.575 14:44:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:27.834 14:44:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:34:27.834 14:44:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 b04a6c53-b6e0-4a12-8867-afa949a732cb 00:34:27.834 14:44:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:28.094 14:44:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:34:28.094 14:44:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:34:28.094 14:44:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:34:28.094 14:44:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:34:28.094 14:44:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:40.321 Initializing NVMe Controllers 00:34:40.321 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:40.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:40.321 Initialization complete. Launching workers. 
00:34:40.321 ======================================================== 00:34:40.321 Latency(us) 00:34:40.321 Device Information : IOPS MiB/s Average min max 00:34:40.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.40 0.02 22080.57 240.65 49505.07 00:34:40.321 ======================================================== 00:34:40.321 Total : 45.40 0.02 22080.57 240.65 49505.07 00:34:40.321 00:34:40.321 14:45:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:34:40.321 14:45:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:50.316 Initializing NVMe Controllers 00:34:50.316 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:50.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:50.316 Initialization complete. Launching workers. 
00:34:50.316 ======================================================== 00:34:50.316 Latency(us) 00:34:50.316 Device Information : IOPS MiB/s Average min max 00:34:50.316 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 57.60 7.20 17371.63 5491.81 51892.52 00:34:50.316 ======================================================== 00:34:50.316 Total : 57.60 7.20 17371.63 5491.81 51892.52 00:34:50.316 00:34:50.316 14:45:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:34:50.316 14:45:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:34:50.317 14:45:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:00.315 Initializing NVMe Controllers 00:35:00.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:00.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:00.315 Initialization complete. Launching workers. 
00:35:00.315 ======================================================== 00:35:00.315 Latency(us) 00:35:00.315 Device Information : IOPS MiB/s Average min max 00:35:00.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8533.00 4.17 3751.50 538.20 9916.41 00:35:00.315 ======================================================== 00:35:00.315 Total : 8533.00 4.17 3751.50 538.20 9916.41 00:35:00.315 00:35:00.315 14:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:35:00.315 14:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:10.311 Initializing NVMe Controllers 00:35:10.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:10.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:10.311 Initialization complete. Launching workers. 
00:35:10.311 ======================================================== 00:35:10.311 Latency(us) 00:35:10.311 Device Information : IOPS MiB/s Average min max 00:35:10.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3504.70 438.09 9136.22 666.53 22811.15 00:35:10.311 ======================================================== 00:35:10.311 Total : 3504.70 438.09 9136.22 666.53 22811.15 00:35:10.311 00:35:10.311 14:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:35:10.311 14:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:35:10.311 14:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:20.305 Initializing NVMe Controllers 00:35:20.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:20.305 Controller IO queue size 128, less than required. 00:35:20.305 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:20.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:20.305 Initialization complete. Launching workers. 
00:35:20.305 ======================================================== 00:35:20.305 Latency(us) 00:35:20.305 Device Information : IOPS MiB/s Average min max 00:35:20.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15605.42 7.62 8202.13 2009.26 22851.56 00:35:20.305 ======================================================== 00:35:20.305 Total : 15605.42 7.62 8202.13 2009.26 22851.56 00:35:20.305 00:35:20.305 14:45:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:35:20.305 14:45:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:32.545 Initializing NVMe Controllers 00:35:32.545 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:32.545 Controller IO queue size 128, less than required. 00:35:32.545 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:32.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:32.545 Initialization complete. Launching workers. 
00:35:32.545 ======================================================== 00:35:32.545 Latency(us) 00:35:32.545 Device Information : IOPS MiB/s Average min max 00:35:32.545 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1171.22 146.40 109521.60 15803.20 241488.06 00:35:32.545 ======================================================== 00:35:32.545 Total : 1171.22 146.40 109521.60 15803.20 241488.06 00:35:32.545 00:35:32.545 14:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:32.545 14:45:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b04a6c53-b6e0-4a12-8867-afa949a732cb 00:35:32.545 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:35:32.805 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9663a6c6-7c04-4d15-9428-ce1a7c74a982 00:35:32.805 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:35:33.065 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:35:33.065 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:35:33.065 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:33.065 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:35:33.065 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:33.065 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:35:33.065 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:35:33.065 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:33.065 rmmod nvme_tcp 00:35:33.065 rmmod nvme_fabrics 00:35:33.065 rmmod nvme_keyring 00:35:33.065 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:33.065 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:35:33.065 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:35:33.065 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 3195546 ']' 00:35:33.065 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 3195546 00:35:33.065 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 3195546 ']' 00:35:33.065 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 3195546 00:35:33.065 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:35:33.065 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:33.065 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3195546 00:35:33.326 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:33.326 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:33.326 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3195546' 00:35:33.326 killing process with pid 3195546 00:35:33.326 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 3195546 00:35:33.326 14:45:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 3195546 00:35:35.866 14:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:35.866 14:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # 
[[ tcp == \t\c\p ]] 00:35:35.866 14:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:35.866 14:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:35:35.866 14:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:35:35.866 14:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:35.866 14:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:35:35.866 14:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:35.866 14:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:35.866 14:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:35.866 14:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:35.866 14:45:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:38.411 00:35:38.411 real 1m35.093s 00:35:38.411 user 5m35.372s 00:35:38.411 sys 0m15.955s 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:35:38.411 ************************************ 00:35:38.411 END TEST nvmf_perf 00:35:38.411 ************************************ 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:38.411 ************************************ 00:35:38.411 START TEST nvmf_fio_host 00:35:38.411 ************************************ 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:35:38.411 * Looking for test storage... 00:35:38.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:35:38.411 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- 
# export 'LCOV_OPTS= 00:35:38.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.412 --rc genhtml_branch_coverage=1 00:35:38.412 --rc genhtml_function_coverage=1 00:35:38.412 --rc genhtml_legend=1 00:35:38.412 --rc geninfo_all_blocks=1 00:35:38.412 --rc geninfo_unexecuted_blocks=1 00:35:38.412 00:35:38.412 ' 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:35:38.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.412 --rc genhtml_branch_coverage=1 00:35:38.412 --rc genhtml_function_coverage=1 00:35:38.412 --rc genhtml_legend=1 00:35:38.412 --rc geninfo_all_blocks=1 00:35:38.412 --rc geninfo_unexecuted_blocks=1 00:35:38.412 00:35:38.412 ' 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:35:38.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.412 --rc genhtml_branch_coverage=1 00:35:38.412 --rc genhtml_function_coverage=1 00:35:38.412 --rc genhtml_legend=1 00:35:38.412 --rc geninfo_all_blocks=1 00:35:38.412 --rc geninfo_unexecuted_blocks=1 00:35:38.412 00:35:38.412 ' 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:35:38.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:38.412 --rc genhtml_branch_coverage=1 00:35:38.412 --rc genhtml_function_coverage=1 00:35:38.412 --rc genhtml_legend=1 00:35:38.412 --rc geninfo_all_blocks=1 00:35:38.412 --rc geninfo_unexecuted_blocks=1 00:35:38.412 00:35:38.412 ' 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:38.412 14:46:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:38.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:38.412 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:38.413 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:38.413 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:38.413 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:38.413 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:38.413 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:38.413 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:38.413 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:38.413 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:38.413 14:46:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:35:38.413 14:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.554 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:46.554 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:35:46.554 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:46.554 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:46.554 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:46.554 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:46.554 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:31:00.0 (0x8086 - 0x159b)' 00:35:46.555 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:46.555 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:46.555 14:46:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:46.555 Found net devices under 0000:31:00.0: cvl_0_0 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:46.555 Found net devices under 0000:31:00.1: cvl_0_1 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 
00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:46.555 14:46:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:46.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:46.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.516 ms 00:35:46.555 00:35:46.555 --- 10.0.0.2 ping statistics --- 00:35:46.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:46.555 rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:46.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:46.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:35:46.555 00:35:46.555 --- 10.0.0.1 ping statistics --- 00:35:46.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:46.555 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3216590 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3216590 00:35:46.555 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 3216590 ']' 00:35:46.556 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:46.556 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:46.556 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:46.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:46.556 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:46.556 14:46:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.556 [2024-10-07 14:46:09.507515] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:35:46.556 [2024-10-07 14:46:09.507620] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:46.556 [2024-10-07 14:46:09.635658] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:46.556 [2024-10-07 14:46:09.815612] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:46.556 [2024-10-07 14:46:09.815663] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:46.556 [2024-10-07 14:46:09.815675] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:46.556 [2024-10-07 14:46:09.815688] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:46.556 [2024-10-07 14:46:09.815697] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:46.556 [2024-10-07 14:46:09.817986] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:35:46.556 [2024-10-07 14:46:09.818072] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:35:46.556 [2024-10-07 14:46:09.818171] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:46.556 [2024-10-07 14:46:09.818193] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:35:46.817 14:46:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:46.817 14:46:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:35:46.817 14:46:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:46.817 [2024-10-07 14:46:10.422923] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:46.817 14:46:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:35:46.817 14:46:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:46.817 14:46:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:35:46.817 14:46:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:35:47.079 Malloc1 00:35:47.079 14:46:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:47.340 14:46:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:47.600 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:47.600 [2024-10-07 14:46:11.251899] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:47.600 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:47.878 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:47.878 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:35:47.878 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:35:47.878 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:47.878 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:47.878 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:47.878 14:46:11 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:47.878 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:35:47.878 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:47.878 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:47.878 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:35:47.878 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:47.878 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:47.878 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:47.878 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:47.878 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:35:47.878 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:35:47.878 14:46:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:35:48.445 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:35:48.445 fio-3.35 00:35:48.445 Starting 1 thread 00:35:50.988 00:35:50.988 test: (groupid=0, jobs=1): err= 0: pid=3217135: Mon Oct 7 14:46:14 2024 00:35:50.988 read: 
IOPS=8560, BW=33.4MiB/s (35.1MB/s)(67.1MiB/2007msec) 00:35:50.988 slat (usec): min=2, max=244, avg= 2.32, stdev= 2.66 00:35:50.988 clat (usec): min=3474, max=14077, avg=8241.60, stdev=612.15 00:35:50.988 lat (usec): min=3515, max=14079, avg=8243.91, stdev=611.95 00:35:50.988 clat percentiles (usec): 00:35:50.988 | 1.00th=[ 6849], 5.00th=[ 7308], 10.00th=[ 7504], 20.00th=[ 7767], 00:35:50.988 | 30.00th=[ 7963], 40.00th=[ 8094], 50.00th=[ 8225], 60.00th=[ 8455], 00:35:50.988 | 70.00th=[ 8586], 80.00th=[ 8717], 90.00th=[ 8979], 95.00th=[ 9110], 00:35:50.988 | 99.00th=[ 9634], 99.50th=[ 9765], 99.90th=[12125], 99.95th=[13304], 00:35:50.988 | 99.99th=[14091] 00:35:50.988 bw ( KiB/s): min=33016, max=34888, per=99.91%, avg=34210.00, stdev=821.35, samples=4 00:35:50.988 iops : min= 8254, max= 8722, avg=8552.50, stdev=205.34, samples=4 00:35:50.988 write: IOPS=8557, BW=33.4MiB/s (35.0MB/s)(67.1MiB/2007msec); 0 zone resets 00:35:50.988 slat (usec): min=2, max=240, avg= 2.39, stdev= 2.04 00:35:50.988 clat (usec): min=2621, max=13042, avg=6631.04, stdev=524.70 00:35:50.988 lat (usec): min=2638, max=13044, avg=6633.43, stdev=524.54 00:35:50.988 clat percentiles (usec): 00:35:50.988 | 1.00th=[ 5473], 5.00th=[ 5866], 10.00th=[ 6063], 20.00th=[ 6259], 00:35:50.988 | 30.00th=[ 6390], 40.00th=[ 6521], 50.00th=[ 6652], 60.00th=[ 6718], 00:35:50.988 | 70.00th=[ 6849], 80.00th=[ 7046], 90.00th=[ 7177], 95.00th=[ 7373], 00:35:50.988 | 99.00th=[ 7701], 99.50th=[ 7898], 99.90th=[11600], 99.95th=[12256], 00:35:50.988 | 99.99th=[13042] 00:35:50.988 bw ( KiB/s): min=34048, max=34432, per=100.00%, avg=34244.00, stdev=217.18, samples=4 00:35:50.988 iops : min= 8512, max= 8608, avg=8561.00, stdev=54.30, samples=4 00:35:50.988 lat (msec) : 4=0.07%, 10=99.70%, 20=0.23% 00:35:50.988 cpu : usr=73.38%, sys=25.37%, ctx=39, majf=0, minf=1538 00:35:50.988 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:35:50.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:35:50.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:50.988 issued rwts: total=17180,17174,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:50.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:50.988 00:35:50.988 Run status group 0 (all jobs): 00:35:50.988 READ: bw=33.4MiB/s (35.1MB/s), 33.4MiB/s-33.4MiB/s (35.1MB/s-35.1MB/s), io=67.1MiB (70.4MB), run=2007-2007msec 00:35:50.988 WRITE: bw=33.4MiB/s (35.0MB/s), 33.4MiB/s-33.4MiB/s (35.0MB/s-35.0MB/s), io=67.1MiB (70.3MB), run=2007-2007msec 00:35:50.988 ----------------------------------------------------- 00:35:50.988 Suppressions used: 00:35:50.988 count bytes template 00:35:50.988 1 57 /usr/src/fio/parse.c 00:35:50.988 1 8 libtcmalloc_minimal.so 00:35:50.988 ----------------------------------------------------- 00:35:50.988 00:35:50.988 14:46:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:35:50.988 14:46:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:35:50.988 14:46:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:50.988 14:46:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:50.988 14:46:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:50.988 14:46:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:50.988 14:46:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:35:50.988 14:46:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:50.988 14:46:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:50.988 14:46:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:50.988 14:46:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:35:50.988 14:46:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:50.988 14:46:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:50.988 14:46:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:50.988 14:46:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:35:50.988 14:46:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:35:50.988 14:46:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:35:51.249 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:35:51.249 fio-3.35 00:35:51.249 Starting 1 thread 00:35:53.790 00:35:53.790 test: (groupid=0, jobs=1): err= 0: pid=3217950: Mon Oct 7 14:46:17 2024 00:35:53.790 read: IOPS=8639, BW=135MiB/s (142MB/s)(270MiB/2002msec) 00:35:53.790 slat (usec): min=3, max=122, avg= 3.87, stdev= 1.55 00:35:53.790 clat (usec): min=688, max=19811, avg=8594.54, 
stdev=1941.43 00:35:53.790 lat (usec): min=696, max=19815, avg=8598.42, stdev=1941.55 00:35:53.790 clat percentiles (usec): 00:35:53.790 | 1.00th=[ 4555], 5.00th=[ 5473], 10.00th=[ 6128], 20.00th=[ 6849], 00:35:53.790 | 30.00th=[ 7439], 40.00th=[ 7963], 50.00th=[ 8586], 60.00th=[ 9110], 00:35:53.790 | 70.00th=[ 9765], 80.00th=[10421], 90.00th=[11207], 95.00th=[11731], 00:35:53.790 | 99.00th=[13042], 99.50th=[13829], 99.90th=[14484], 99.95th=[14746], 00:35:53.790 | 99.99th=[15401] 00:35:53.790 bw ( KiB/s): min=60576, max=84256, per=52.54%, avg=72624.00, stdev=13008.27, samples=4 00:35:53.790 iops : min= 3786, max= 5266, avg=4539.00, stdev=813.02, samples=4 00:35:53.790 write: IOPS=5317, BW=83.1MiB/s (87.1MB/s)(148MiB/1777msec); 0 zone resets 00:35:53.790 slat (usec): min=40, max=390, avg=41.76, stdev= 7.38 00:35:53.790 clat (usec): min=3906, max=17216, avg=10233.29, stdev=1708.38 00:35:53.790 lat (usec): min=3946, max=17257, avg=10275.05, stdev=1709.61 00:35:53.791 clat percentiles (usec): 00:35:53.791 | 1.00th=[ 7177], 5.00th=[ 7963], 10.00th=[ 8225], 20.00th=[ 8717], 00:35:53.791 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10421], 00:35:53.791 | 70.00th=[10945], 80.00th=[11469], 90.00th=[12518], 95.00th=[13566], 00:35:53.791 | 99.00th=[15008], 99.50th=[15533], 99.90th=[16581], 99.95th=[16909], 00:35:53.791 | 99.99th=[17171] 00:35:53.791 bw ( KiB/s): min=63008, max=87584, per=88.85%, avg=75600.00, stdev=13544.75, samples=4 00:35:53.791 iops : min= 3938, max= 5474, avg=4725.00, stdev=846.55, samples=4 00:35:53.791 lat (usec) : 750=0.01% 00:35:53.791 lat (msec) : 4=0.16%, 10=64.75%, 20=35.09% 00:35:53.791 cpu : usr=86.76%, sys=12.14%, ctx=13, majf=0, minf=2337 00:35:53.791 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:35:53.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:53.791 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:53.791 issued rwts: 
total=17296,9450,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:53.791 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:53.791 00:35:53.791 Run status group 0 (all jobs): 00:35:53.791 READ: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=270MiB (283MB), run=2002-2002msec 00:35:53.791 WRITE: bw=83.1MiB/s (87.1MB/s), 83.1MiB/s-83.1MiB/s (87.1MB/s-87.1MB/s), io=148MiB (155MB), run=1777-1777msec 00:35:54.052 ----------------------------------------------------- 00:35:54.052 Suppressions used: 00:35:54.052 count bytes template 00:35:54.052 1 57 /usr/src/fio/parse.c 00:35:54.052 1232 118272 /usr/src/fio/iolog.c 00:35:54.052 1 8 libtcmalloc_minimal.so 00:35:54.052 ----------------------------------------------------- 00:35:54.052 00:35:54.052 14:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:54.052 14:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:35:54.052 14:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:35:54.052 14:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:35:54.052 14:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:35:54.053 14:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:35:54.053 14:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:54.053 14:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:54.053 14:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:35:54.313 14:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # 
(( 1 == 0 )) 00:35:54.313 14:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:35:54.313 14:46:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:35:54.880 Nvme0n1 00:35:54.880 14:46:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:35:55.449 14:46:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=97f5c7a7-c0fe-4397-b37b-516cadb3e20c 00:35:55.449 14:46:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 97f5c7a7-c0fe-4397-b37b-516cadb3e20c 00:35:55.449 14:46:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=97f5c7a7-c0fe-4397-b37b-516cadb3e20c 00:35:55.449 14:46:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:35:55.449 14:46:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:35:55.449 14:46:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:35:55.449 14:46:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:55.449 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:35:55.449 { 00:35:55.449 "uuid": "97f5c7a7-c0fe-4397-b37b-516cadb3e20c", 00:35:55.449 "name": "lvs_0", 00:35:55.449 "base_bdev": "Nvme0n1", 00:35:55.449 "total_data_clusters": 1787, 00:35:55.449 "free_clusters": 1787, 00:35:55.449 "block_size": 512, 00:35:55.449 "cluster_size": 1073741824 00:35:55.449 } 00:35:55.449 ]' 00:35:55.449 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | 
select(.uuid=="97f5c7a7-c0fe-4397-b37b-516cadb3e20c") .free_clusters' 00:35:55.709 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1787 00:35:55.709 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="97f5c7a7-c0fe-4397-b37b-516cadb3e20c") .cluster_size' 00:35:55.709 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:35:55.709 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1829888 00:35:55.709 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1829888 00:35:55.709 1829888 00:35:55.709 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:35:55.709 fcedb50f-27de-46d3-bcc3-5544c3f63d18 00:35:55.709 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:35:55.970 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:35:56.231 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:56.231 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:35:56.231 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:35:56.231 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:56.231 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:56.231 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:56.231 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:56.231 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:35:56.231 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:56.231 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:56.491 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:35:56.491 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:35:56.491 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:56.491 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:35:56.491 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:35:56.491 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:35:56.491 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 
00:35:56.491 14:46:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:35:56.751 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:35:56.751 fio-3.35 00:35:56.751 Starting 1 thread 00:35:59.309 00:35:59.309 test: (groupid=0, jobs=1): err= 0: pid=3219141: Mon Oct 7 14:46:22 2024 00:35:59.309 read: IOPS=9022, BW=35.2MiB/s (37.0MB/s)(70.7MiB/2006msec) 00:35:59.309 slat (usec): min=2, max=124, avg= 2.33, stdev= 1.26 00:35:59.309 clat (usec): min=2939, max=12628, avg=7819.41, stdev=599.24 00:35:59.309 lat (usec): min=2959, max=12631, avg=7821.74, stdev=599.18 00:35:59.309 clat percentiles (usec): 00:35:59.309 | 1.00th=[ 6456], 5.00th=[ 6849], 10.00th=[ 7111], 20.00th=[ 7373], 00:35:59.309 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 7963], 00:35:59.309 | 70.00th=[ 8094], 80.00th=[ 8291], 90.00th=[ 8586], 95.00th=[ 8717], 00:35:59.309 | 99.00th=[ 9110], 99.50th=[ 9372], 99.90th=[11207], 99.95th=[11863], 00:35:59.309 | 99.99th=[12125] 00:35:59.310 bw ( KiB/s): min=34746, max=37056, per=99.85%, avg=36036.50, stdev=956.62, samples=4 00:35:59.310 iops : min= 8686, max= 9264, avg=9009.00, stdev=239.32, samples=4 00:35:59.310 write: IOPS=9041, BW=35.3MiB/s (37.0MB/s)(70.8MiB/2006msec); 0 zone resets 00:35:59.310 slat (nsec): min=2230, max=103868, avg=2414.50, stdev=841.63 00:35:59.310 clat (usec): min=1484, max=11433, avg=6253.31, stdev=515.79 00:35:59.310 lat (usec): min=1494, max=11435, avg=6255.72, stdev=515.75 00:35:59.310 clat percentiles (usec): 00:35:59.310 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5866], 00:35:59.310 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6390], 00:35:59.310 | 70.00th=[ 6521], 80.00th=[ 6652], 90.00th=[ 6849], 95.00th=[ 7046], 
00:35:59.310 | 99.00th=[ 7373], 99.50th=[ 7504], 99.90th=[ 9765], 99.95th=[10552], 00:35:59.310 | 99.99th=[11338] 00:35:59.310 bw ( KiB/s): min=35656, max=36488, per=99.95%, avg=36148.00, stdev=366.05, samples=4 00:35:59.310 iops : min= 8914, max= 9122, avg=9037.00, stdev=91.51, samples=4 00:35:59.310 lat (msec) : 2=0.01%, 4=0.10%, 10=99.74%, 20=0.16% 00:35:59.310 cpu : usr=75.41%, sys=23.39%, ctx=50, majf=0, minf=1535 00:35:59.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:35:59.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:59.310 issued rwts: total=18100,18137,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.310 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:59.310 00:35:59.310 Run status group 0 (all jobs): 00:35:59.310 READ: bw=35.2MiB/s (37.0MB/s), 35.2MiB/s-35.2MiB/s (37.0MB/s-37.0MB/s), io=70.7MiB (74.1MB), run=2006-2006msec 00:35:59.310 WRITE: bw=35.3MiB/s (37.0MB/s), 35.3MiB/s-35.3MiB/s (37.0MB/s-37.0MB/s), io=70.8MiB (74.3MB), run=2006-2006msec 00:35:59.571 ----------------------------------------------------- 00:35:59.571 Suppressions used: 00:35:59.571 count bytes template 00:35:59.571 1 58 /usr/src/fio/parse.c 00:35:59.571 1 8 libtcmalloc_minimal.so 00:35:59.571 ----------------------------------------------------- 00:35:59.571 00:35:59.571 14:46:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:59.571 14:46:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:36:00.511 14:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=77aa1d59-e29c-4a77-977d-a905aafaa1ce 00:36:00.511 14:46:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 77aa1d59-e29c-4a77-977d-a905aafaa1ce 00:36:00.511 14:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=77aa1d59-e29c-4a77-977d-a905aafaa1ce 00:36:00.511 14:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:36:00.511 14:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:36:00.511 14:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:36:00.511 14:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:36:00.771 14:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:36:00.771 { 00:36:00.771 "uuid": "97f5c7a7-c0fe-4397-b37b-516cadb3e20c", 00:36:00.771 "name": "lvs_0", 00:36:00.771 "base_bdev": "Nvme0n1", 00:36:00.771 "total_data_clusters": 1787, 00:36:00.771 "free_clusters": 0, 00:36:00.771 "block_size": 512, 00:36:00.771 "cluster_size": 1073741824 00:36:00.771 }, 00:36:00.771 { 00:36:00.771 "uuid": "77aa1d59-e29c-4a77-977d-a905aafaa1ce", 00:36:00.771 "name": "lvs_n_0", 00:36:00.771 "base_bdev": "fcedb50f-27de-46d3-bcc3-5544c3f63d18", 00:36:00.771 "total_data_clusters": 457025, 00:36:00.771 "free_clusters": 457025, 00:36:00.771 "block_size": 512, 00:36:00.771 "cluster_size": 4194304 00:36:00.771 } 00:36:00.771 ]' 00:36:00.771 14:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="77aa1d59-e29c-4a77-977d-a905aafaa1ce") .free_clusters' 00:36:00.771 14:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=457025 00:36:00.771 14:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="77aa1d59-e29c-4a77-977d-a905aafaa1ce") .cluster_size' 00:36:00.771 14:46:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:36:00.771 14:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=1828100 00:36:00.771 14:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 1828100 00:36:00.771 1828100 00:36:00.771 14:46:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:36:03.314 a0652e57-4427-4580-82c1-bafd19a25f16 00:36:03.314 14:46:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:36:03.314 14:46:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:36:03.314 14:46:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:36:03.575 14:46:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:36:03.575 14:46:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:36:03.575 14:46:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:03.575 14:46:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:03.575 14:46:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:03.575 14:46:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:36:03.575 14:46:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:36:03.575 14:46:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:03.575 14:46:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:03.575 14:46:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:36:03.575 14:46:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:36:03.575 14:46:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:03.575 14:46:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:03.575 14:46:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:03.575 14:46:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:36:03.575 14:46:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:36:03.575 14:46:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:36:04.142 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk, iodepth=128 00:36:04.142 fio-3.35 00:36:04.142 Starting 1 thread 00:36:06.685 00:36:06.685 test: (groupid=0, jobs=1): err= 0: pid=3220642: Mon Oct 7 14:46:29 2024 00:36:06.685 read: IOPS=8226, BW=32.1MiB/s (33.7MB/s)(64.5MiB/2006msec) 00:36:06.685 slat (usec): min=2, max=119, avg= 2.33, stdev= 1.28 00:36:06.685 clat (usec): min=4037, max=14386, avg=8593.84, stdev=667.47 00:36:06.685 lat (usec): min=4055, max=14388, avg=8596.17, stdev=667.40 00:36:06.685 clat percentiles (usec): 00:36:06.685 | 1.00th=[ 7046], 5.00th=[ 7504], 10.00th=[ 7767], 20.00th=[ 8094], 00:36:06.685 | 30.00th=[ 8291], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8717], 00:36:06.685 | 70.00th=[ 8979], 80.00th=[ 9110], 90.00th=[ 9372], 95.00th=[ 9634], 00:36:06.685 | 99.00th=[10028], 99.50th=[10290], 99.90th=[12911], 99.95th=[13566], 00:36:06.685 | 99.99th=[14353] 00:36:06.685 bw ( KiB/s): min=31632, max=33496, per=99.85%, avg=32858.00, stdev=856.00, samples=4 00:36:06.685 iops : min= 7908, max= 8374, avg=8214.50, stdev=214.00, samples=4 00:36:06.685 write: IOPS=8228, BW=32.1MiB/s (33.7MB/s)(64.5MiB/2006msec); 0 zone resets 00:36:06.685 slat (nsec): min=2223, max=108583, avg=2431.62, stdev=947.46 00:36:06.685 clat (usec): min=1580, max=12792, avg=6847.08, stdev=586.30 00:36:06.685 lat (usec): min=1590, max=12794, avg=6849.51, stdev=586.25 00:36:06.685 clat percentiles (usec): 00:36:06.685 | 1.00th=[ 5538], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6390], 00:36:06.685 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6980], 00:36:06.685 | 70.00th=[ 7111], 80.00th=[ 7308], 90.00th=[ 7504], 95.00th=[ 7701], 00:36:06.685 | 99.00th=[ 8094], 99.50th=[ 8291], 99.90th=[11338], 99.95th=[12125], 00:36:06.685 | 99.99th=[12780] 00:36:06.685 bw ( KiB/s): min=32512, max=33248, per=99.98%, avg=32908.00, stdev=313.91, samples=4 00:36:06.685 iops : min= 8128, max= 8312, avg=8227.00, stdev=78.48, samples=4 00:36:06.685 lat (msec) : 2=0.01%, 4=0.05%, 10=99.25%, 20=0.69% 
00:36:06.685 cpu : usr=73.57%, sys=25.19%, ctx=54, majf=0, minf=1534 00:36:06.685 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:36:06.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.685 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:36:06.685 issued rwts: total=16503,16507,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.685 latency : target=0, window=0, percentile=100.00%, depth=128 00:36:06.685 00:36:06.685 Run status group 0 (all jobs): 00:36:06.685 READ: bw=32.1MiB/s (33.7MB/s), 32.1MiB/s-32.1MiB/s (33.7MB/s-33.7MB/s), io=64.5MiB (67.6MB), run=2006-2006msec 00:36:06.685 WRITE: bw=32.1MiB/s (33.7MB/s), 32.1MiB/s-32.1MiB/s (33.7MB/s-33.7MB/s), io=64.5MiB (67.6MB), run=2006-2006msec 00:36:06.685 ----------------------------------------------------- 00:36:06.685 Suppressions used: 00:36:06.685 count bytes template 00:36:06.685 1 58 /usr/src/fio/parse.c 00:36:06.685 1 8 libtcmalloc_minimal.so 00:36:06.685 ----------------------------------------------------- 00:36:06.685 00:36:06.685 14:46:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:36:06.945 14:46:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:36:06.945 14:46:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:36:10.242 14:46:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:36:10.520 14:46:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:36:11.091 14:46:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:36:11.091 14:46:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:36:13.648 14:46:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:13.648 14:46:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:36:13.648 14:46:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:36:13.648 14:46:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:13.648 14:46:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:36:13.648 14:46:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:13.648 14:46:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:36:13.648 14:46:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:13.648 14:46:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:13.648 rmmod nvme_tcp 00:36:13.648 rmmod nvme_fabrics 00:36:13.648 rmmod nvme_keyring 00:36:13.648 14:46:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:13.648 14:46:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:36:13.648 14:46:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:36:13.648 14:46:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 3216590 ']' 00:36:13.648 14:46:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 3216590 00:36:13.648 14:46:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 3216590 ']' 00:36:13.648 14:46:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 3216590 00:36:13.648 14:46:36 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:36:13.648 14:46:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:13.648 14:46:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3216590 00:36:13.648 14:46:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:13.648 14:46:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:13.648 14:46:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3216590' 00:36:13.648 killing process with pid 3216590 00:36:13.648 14:46:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 3216590 00:36:13.648 14:46:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 3216590 00:36:14.588 14:46:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:14.588 14:46:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:14.588 14:46:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:14.588 14:46:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:36:14.588 14:46:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:36:14.588 14:46:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:14.588 14:46:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:36:14.588 14:46:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:14.588 14:46:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:14.588 14:46:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:14.588 
14:46:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:14.588 14:46:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:16.499 14:46:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:16.499 00:36:16.499 real 0m38.365s 00:36:16.499 user 2m58.985s 00:36:16.499 sys 0m13.068s 00:36:16.499 14:46:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:16.499 14:46:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.499 ************************************ 00:36:16.499 END TEST nvmf_fio_host 00:36:16.499 ************************************ 00:36:16.499 14:46:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:36:16.499 14:46:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:16.499 14:46:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:16.499 14:46:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.499 ************************************ 00:36:16.499 START TEST nvmf_failover 00:36:16.499 ************************************ 00:36:16.499 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:36:16.499 * Looking for test storage... 
00:36:16.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:16.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.761 --rc genhtml_branch_coverage=1 00:36:16.761 --rc genhtml_function_coverage=1 00:36:16.761 --rc genhtml_legend=1 00:36:16.761 --rc geninfo_all_blocks=1 00:36:16.761 --rc geninfo_unexecuted_blocks=1 00:36:16.761 00:36:16.761 ' 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- 
# LCOV_OPTS=' 00:36:16.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.761 --rc genhtml_branch_coverage=1 00:36:16.761 --rc genhtml_function_coverage=1 00:36:16.761 --rc genhtml_legend=1 00:36:16.761 --rc geninfo_all_blocks=1 00:36:16.761 --rc geninfo_unexecuted_blocks=1 00:36:16.761 00:36:16.761 ' 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:16.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.761 --rc genhtml_branch_coverage=1 00:36:16.761 --rc genhtml_function_coverage=1 00:36:16.761 --rc genhtml_legend=1 00:36:16.761 --rc geninfo_all_blocks=1 00:36:16.761 --rc geninfo_unexecuted_blocks=1 00:36:16.761 00:36:16.761 ' 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:16.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.761 --rc genhtml_branch_coverage=1 00:36:16.761 --rc genhtml_function_coverage=1 00:36:16.761 --rc genhtml_legend=1 00:36:16.761 --rc geninfo_all_blocks=1 00:36:16.761 --rc geninfo_unexecuted_blocks=1 00:36:16.761 00:36:16.761 ' 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.761 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:16.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:36:16.762 14:46:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:36:24.899 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:24.899 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:36:24.899 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:36:24.899 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:24.899 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:24.899 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:24.899 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:24.899 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:36:24.899 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:24.899 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:36:24.899 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:36:24.899 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:36:24.899 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:24.900 14:46:47 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:24.900 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:24.900 14:46:47 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:24.900 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:24.900 14:46:47 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:24.900 Found net devices under 0000:31:00.0: cvl_0_0 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:24.900 Found net devices under 0000:31:00.1: cvl_0_1 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:24.900 14:46:47 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:36:24.900  14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:36:24.900  14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:36:24.900  14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:36:24.900  14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:36:24.900  14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:36:24.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:24.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms
00:36:24.900
00:36:24.900 --- 10.0.0.2 ping statistics ---
00:36:24.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:24.900 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms
00:36:24.900  14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:24.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:24.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms
00:36:24.900
00:36:24.900 --- 10.0.0.1 ping statistics ---
00:36:24.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:24.900 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms
00:36:24.900  14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:24.900  14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0
00:36:24.900  14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:36:24.900  14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:24.900  14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:36:24.900  14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:36:24.900  14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:24.900  14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:36:24.900  14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:36:24.900  14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:36:24.900  14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:36:24.900  14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable
00:36:24.900  14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:36:24.900  14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=3226709
00:36:24.900  14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 3226709
00:36:24.900  14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3226709 ']' 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:24.900 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:24.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:24.901 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:24.901 14:46:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:36:24.901 [2024-10-07 14:46:47.901390] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:36:24.901 [2024-10-07 14:46:47.901518] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:24.901 [2024-10-07 14:46:48.062577] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:24.901 [2024-10-07 14:46:48.290049] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:24.901 [2024-10-07 14:46:48.290128] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:24.901 [2024-10-07 14:46:48.290141] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:24.901 [2024-10-07 14:46:48.290155] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:36:24.901 [2024-10-07 14:46:48.290165] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:24.901 [2024-10-07 14:46:48.292404] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:36:24.901 [2024-10-07 14:46:48.292532] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:24.901 [2024-10-07 14:46:48.292556] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:36:25.161 14:46:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:25.161 14:46:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:36:25.161 14:46:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:25.161 14:46:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:25.161 14:46:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:36:25.161 14:46:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:25.161 14:46:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:25.161 [2024-10-07 14:46:48.850266] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:25.420 14:46:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:36:25.420 Malloc0 00:36:25.420 14:46:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:25.680 14:46:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:25.940 14:46:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:25.940 [2024-10-07 14:46:49.608027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:25.940 14:46:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:26.201 [2024-10-07 14:46:49.776458] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:26.201 14:46:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:36:26.461 [2024-10-07 14:46:49.944986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:36:26.461 14:46:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3227212 00:36:26.461 14:46:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:36:26.461 14:46:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:26.461 14:46:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3227212 /var/tmp/bdevperf.sock 00:36:26.461 14:46:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 
-- # '[' -z 3227212 ']' 00:36:26.461 14:46:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:26.461 14:46:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:26.461 14:46:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:26.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:26.461 14:46:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:26.461 14:46:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:36:27.399 14:46:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:27.399 14:46:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:36:27.399 14:46:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:36:27.659 NVMe0n1 00:36:27.659 14:46:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:36:27.919 00:36:27.919 14:46:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:27.919 14:46:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3227465 00:36:27.919 14:46:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:36:29.303 14:46:52 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:29.303 [2024-10-07 14:46:52.730203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(6) to be set 00:36:29.304 14:46:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:36:32.770 14:46:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:36:32.770 00:36:32.770 14:46:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:32.770 [2024-10-07 14:46:56.206511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with
the state(6) to be set 00:36:32.770 14:46:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- #
sleep 3 00:36:36.108 14:46:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:36.108 [2024-10-07 14:46:59.391354] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:36.108 14:46:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:36:37.050 14:47:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:36:37.050 [2024-10-07 14:47:00.584309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:36:37.050 14:47:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3227465 00:36:43.634 { 00:36:43.634 "results": [ 00:36:43.634 { 00:36:43.634 "job": "NVMe0n1", 00:36:43.634 "core_mask": "0x1", 00:36:43.634 "workload": "verify", 00:36:43.634 "status": "finished", 00:36:43.634 "verify_range": { 00:36:43.634 "start": 0, 00:36:43.634 "length": 16384 00:36:43.634 }, 00:36:43.634 "queue_depth": 128, 00:36:43.634 "io_size": 4096, 00:36:43.634 "runtime": 15.008901, 00:36:43.634 "iops": 10188.620739120073, 00:36:43.634 "mibps": 39.799299762187786, 00:36:43.634 "io_failed": 3909, 00:36:43.634 "io_timeout": 0, 00:36:43.634 "avg_latency_us": 12220.581198290281, 00:36:43.634 "min_latency_us": 580.2666666666667, 00:36:43.634
"max_latency_us": 15400.96 00:36:43.634 } 00:36:43.634 ], 00:36:43.634 "core_count": 1 00:36:43.634 } 00:36:43.634 14:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3227212 00:36:43.634 14:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3227212 ']' 00:36:43.634 14:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3227212 00:36:43.634 14:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:36:43.634 14:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:43.634 14:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3227212 00:36:43.634 14:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:43.634 14:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:43.634 14:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3227212' 00:36:43.634 killing process with pid 3227212 00:36:43.634 14:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3227212 00:36:43.634 14:47:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3227212 00:36:43.902 14:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:36:43.902 [2024-10-07 14:46:50.057048] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
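The bdevperf result block above is internally consistent, and that can be sanity-checked with a few lines of arithmetic. This is a minimal sketch, assuming the "mibps" field is derived from "iops" and "io_size" as iops × io_size / 2^20; the field values are copied verbatim from the JSON result block in this log.

```python
# Sanity-check of the bdevperf JSON summary above, assuming
# "mibps" = "iops" * "io_size" bytes / 2^20 (MiB/s).
result = {
    "iops": 10188.620739120073,
    "mibps": 39.799299762187786,
    "io_size": 4096,       # bytes per I/O, from the result block
    "runtime": 15.008901,  # seconds, from the result block
}

derived_mibps = result["iops"] * result["io_size"] / 2**20
print(f"derived MiB/s: {derived_mibps:.6f}")  # 39.799300, matching "mibps"

# Rough total of completed I/Os over the ~15 s run
print(f"approx. total I/Os: {result['iops'] * result['runtime']:.0f}")
```

The derived throughput matches the reported "mibps" to six decimal places, which confirms the run's figures were computed from the same 4 KiB I/O size shown in the block.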
00:36:43.903 [2024-10-07 14:46:50.057162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3227212 ] 00:36:43.903 [2024-10-07 14:46:50.178833] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:43.903 [2024-10-07 14:46:50.362079] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:43.903 Running I/O for 15 seconds... 00:36:43.903 10083.00 IOPS, 39.39 MiB/s [2024-10-07T12:47:07.612Z] [2024-10-07 14:46:52.731318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:43.903 [2024-10-07 14:46:52.731363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.731379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:43.903 [2024-10-07 14:46:52.731391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.731402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:43.903 [2024-10-07 14:46:52.731413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.731425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:43.903 [2024-10-07 14:46:52.731436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:36:43.903 [2024-10-07 14:46:52.731447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039dd00 is same with the state(6) to be set 00:36:43.903 [2024-10-07 14:46:52.731525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.731542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.731574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.731586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.731599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.731610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.731623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.731633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.731646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.731657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.731670] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.731680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.731693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.731708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.731721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.731732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.731744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.731755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.731768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.731778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.731791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.731801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.731814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.731825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.731837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.731847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.731860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.731871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.731883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.731894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.731907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.731917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.731930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 
[2024-10-07 14:46:52.731940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.731953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.731963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.731975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.731985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.732007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.732019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.732031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.732041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.732055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.732066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.732079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.732089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.732101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.732112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.732124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.732135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.732147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.732157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.732170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.732180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.732192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.732203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.732217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.732227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.732240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:87896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.732251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.732263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:87904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.732273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.732286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:87912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.732298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.732311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.903 [2024-10-07 14:46:52.732321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.732334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:87928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:43.903 [2024-10-07 14:46:52.732344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.732358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.903 [2024-10-07 14:46:52.732368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.903 [2024-10-07 14:46:52.732381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 
14:46:52.732748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:4 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.732978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.732991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.733006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:36:43.904 [2024-10-07 14:46:52.733019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.733029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.733041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.733052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.733064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.733075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.733087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.733098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.733111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.733121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.733133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.904 [2024-10-07 14:46:52.733143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.904 [2024-10-07 14:46:52.733156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 
lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:87936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.905 [2024-10-07 14:46:52.733376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 
14:46:52.733411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733538] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733804] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.733981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.733994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.734008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.734021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.734032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.734044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.734054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.734067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 
[2024-10-07 14:46:52.734078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.734091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.905 [2024-10-07 14:46:52.734101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.905 [2024-10-07 14:46:52.734115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.906 [2024-10-07 14:46:52.734125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:52.734138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.906 [2024-10-07 14:46:52.734148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:52.734160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.906 [2024-10-07 14:46:52.734171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:52.734185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.906 [2024-10-07 14:46:52.734196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:52.734208] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:65 nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.906 [2024-10-07 14:46:52.734221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:52.734234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.906 [2024-10-07 14:46:52.734245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:52.734257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:87968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.906 [2024-10-07 14:46:52.734269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:52.734283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.906 [2024-10-07 14:46:52.734295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:52.734310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.906 [2024-10-07 14:46:52.734322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:52.734336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.906 [2024-10-07 14:46:52.734347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:52.734360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:88000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.906 [2024-10-07 14:46:52.734371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:52.734386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.906 [2024-10-07 14:46:52.734396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:52.734409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:88016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.906 [2024-10-07 14:46:52.734420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:52.734432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.906 [2024-10-07 14:46:52.734442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:52.734454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.906 [2024-10-07 14:46:52.734466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:52.734480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:88040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.906 [2024-10-07 
14:46:52.734490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:52.734503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:88048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.906 [2024-10-07 14:46:52.734513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:52.734525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:88056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.906 [2024-10-07 14:46:52.734537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:52.734563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:36:43.906 [2024-10-07 14:46:52.734574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:36:43.906 [2024-10-07 14:46:52.734587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88064 len:8 PRP1 0x0 PRP2 0x0 00:36:43.906 [2024-10-07 14:46:52.734599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:52.734804] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500039f380 was disconnected and freed. reset controller. 00:36:43.906 [2024-10-07 14:46:52.734820] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:36:43.906 [2024-10-07 14:46:52.734835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:36:43.906 [2024-10-07 14:46:52.738610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:43.906 [2024-10-07 14:46:52.738653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039dd00 (9): Bad file descriptor 00:36:43.906 [2024-10-07 14:46:52.779302] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:36:43.906 9985.00 IOPS, 39.00 MiB/s [2024-10-07T12:47:07.615Z] 9981.67 IOPS, 38.99 MiB/s [2024-10-07T12:47:07.615Z] 9990.25 IOPS, 39.02 MiB/s [2024-10-07T12:47:07.615Z] [2024-10-07 14:46:56.207393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:112112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.906 [2024-10-07 14:46:56.207440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:56.207473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:112120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.906 [2024-10-07 14:46:56.207486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:56.207500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:112128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.906 [2024-10-07 14:46:56.207510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:56.207524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:112136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.906 [2024-10-07 14:46:56.207534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:56.207547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:112144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.906 [2024-10-07 14:46:56.207557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:56.207570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:112152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.906 [2024-10-07 14:46:56.207580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:56.207593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:112160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.906 [2024-10-07 14:46:56.207603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:56.207615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:112168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.906 [2024-10-07 14:46:56.207626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:56.207639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:112176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.906 [2024-10-07 14:46:56.207649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:56.207662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.906 [2024-10-07 14:46:56.207672] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:56.207684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.906 [2024-10-07 14:46:56.207694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:56.207707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:112200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.906 [2024-10-07 14:46:56.207717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:56.207730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:112208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.906 [2024-10-07 14:46:56.207740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:56.207752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:112216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.906 [2024-10-07 14:46:56.207762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:56.207777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:112224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.906 [2024-10-07 14:46:56.207787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:56.207800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:88 nsid:1 lba:112232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.906 [2024-10-07 14:46:56.207809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:56.207823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:112240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.906 [2024-10-07 14:46:56.207833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.906 [2024-10-07 14:46:56.207846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:112248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.906 [2024-10-07 14:46:56.207856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.907 [2024-10-07 14:46:56.207868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-10-07 14:46:56.207878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.907 [2024-10-07 14:46:56.207891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:112264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-10-07 14:46:56.207901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.907 [2024-10-07 14:46:56.207913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:112272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-10-07 14:46:56.207924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:36:43.907 [2024-10-07 14:46:56.207936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:112280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-10-07 14:46:56.207946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.907 [2024-10-07 14:46:56.207959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:112288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-10-07 14:46:56.207969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.907 [2024-10-07 14:46:56.207982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-10-07 14:46:56.207992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.907 [2024-10-07 14:46:56.208012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:112304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-10-07 14:46:56.208022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.907 [2024-10-07 14:46:56.208036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:112312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-10-07 14:46:56.208047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.907 [2024-10-07 14:46:56.208060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:112320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-10-07 14:46:56.208072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.907 [2024-10-07 14:46:56.208084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:112328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-10-07 14:46:56.208095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.907 [2024-10-07 14:46:56.208108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:112336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-10-07 14:46:56.208118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.907 [2024-10-07 14:46:56.208130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:112344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-10-07 14:46:56.208153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.907 [2024-10-07 14:46:56.208167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:112352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-10-07 14:46:56.208177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.907 [2024-10-07 14:46:56.208190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-10-07 14:46:56.208200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.907 [2024-10-07 14:46:56.208213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:112368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.907 [2024-10-07 14:46:56.208223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.907
[... identical nvme_io_qpair_print_command WRITE / spdk_nvme_print_completion "ABORTED - SQ DELETION (00/08)" pairs repeat for sqid:1, nsid:1, lba:112376 through lba:112872, len:8 ...]
[2024-10-07 14:46:56.209728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:36:43.909
[2024-10-07 14:46:56.209746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112880 len:8 PRP1 0x0 PRP2 0x0 00:36:43.909
[2024-10-07 14:46:56.209758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.909
[... identical nvme_qpair_abort_queued_reqs *ERROR*: "aborting queued i/o" / "Command completed manually" / WRITE / "ABORTED - SQ DELETION (00/08)" groups repeat for lba:112888 through lba:113128, len:8 ...]
[2024-10-07 14:46:56.211179] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500039f880 was disconnected and freed. reset controller.
00:36:43.910 [2024-10-07 14:46:56.211196] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:36:43.910 [2024-10-07 14:46:56.211230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:43.910 [2024-10-07 14:46:56.211244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:46:56.211257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:43.910 [2024-10-07 14:46:56.211267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:46:56.211280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:43.910 [2024-10-07 14:46:56.211290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:46:56.211301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:43.910 [2024-10-07 14:46:56.211314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:46:56.211325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:36:43.910 [2024-10-07 14:46:56.211366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039dd00 (9): Bad file descriptor 00:36:43.910 [2024-10-07 14:46:56.215088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:43.910 [2024-10-07 14:46:56.262611] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:36:43.910 10005.80 IOPS, 39.09 MiB/s [2024-10-07T12:47:07.619Z] 10106.00 IOPS, 39.48 MiB/s [2024-10-07T12:47:07.619Z] 10171.71 IOPS, 39.73 MiB/s [2024-10-07T12:47:07.619Z] 10201.62 IOPS, 39.85 MiB/s [2024-10-07T12:47:07.619Z] [2024-10-07 14:47:00.585551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.910 [2024-10-07 14:47:00.585595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:47:00.585621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.910 [2024-10-07 14:47:00.585634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:47:00.585648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:93688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.910 [2024-10-07 14:47:00.585659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:47:00.585672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.910 [2024-10-07 14:47:00.585683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:47:00.585695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:93704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.910 [2024-10-07 14:47:00.585706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:47:00.585719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:93712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.910 [2024-10-07 14:47:00.585730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:47:00.585743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:93720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.910 [2024-10-07 14:47:00.585753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:47:00.585766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:93728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.910 [2024-10-07 14:47:00.585777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:47:00.585789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.910 [2024-10-07 14:47:00.585800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:47:00.585813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:43.910 [2024-10-07 14:47:00.585824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:47:00.585840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:93752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.910 [2024-10-07 14:47:00.585851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:47:00.585864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.910 [2024-10-07 14:47:00.585875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:47:00.585887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.910 [2024-10-07 14:47:00.585897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:47:00.585910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.910 [2024-10-07 14:47:00.585921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:47:00.585933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.910 [2024-10-07 14:47:00.585944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:47:00.585957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.910 [2024-10-07 14:47:00.585967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:47:00.585979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.910 [2024-10-07 14:47:00.585990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:47:00.586008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.910 [2024-10-07 14:47:00.586019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:47:00.586031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.910 [2024-10-07 14:47:00.586042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:47:00.586055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.910 [2024-10-07 14:47:00.586066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:47:00.586079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.910 [2024-10-07 14:47:00.586090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.910 [2024-10-07 14:47:00.586102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.910 [2024-10-07 14:47:00.586112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:43.911 [2024-10-07 14:47:00.586231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:93888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:93952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:93960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:94008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 
[2024-10-07 14:47:00.586629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.586976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.586987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.911 [2024-10-07 14:47:00.587003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.911 [2024-10-07 14:47:00.587014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.912 [2024-10-07 14:47:00.587026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:43.912 [2024-10-07 14:47:00.587038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.912 [2024-10-07 14:47:00.587051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.912 [2024-10-07 14:47:00.587061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.912 [2024-10-07 14:47:00.587074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.912 [2024-10-07 14:47:00.587083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.912 [2024-10-07 14:47:00.587097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:43.912 [2024-10-07 14:47:00.587107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.912 [2024-10-07 14:47:00.587121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.912 [2024-10-07 14:47:00.587131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.912 [2024-10-07 14:47:00.587144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.912 [2024-10-07 14:47:00.587155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.912 [2024-10-07 14:47:00.587167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:43.912 [2024-10-07 14:47:00.587177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... identical WRITE / ABORTED - SQ DELETION (00/08) notice pairs repeated for each queued I/O, lba:94208 through lba:94680, elided ...] 00:36:43.913 [2024-10-07 14:47:00.588630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:36:43.913 [2024-10-07 14:47:00.588642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:36:43.913 [2024-10-07 14:47:00.588654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94688 len:8 PRP1 0x0 PRP2 0x0 00:36:43.913 [2024-10-07 14:47:00.588666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.913 [2024-10-07 14:47:00.588850] 
bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150003a0780 was disconnected and freed. reset controller. 00:36:43.913 [2024-10-07 14:47:00.588866] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:36:43.913 [2024-10-07 14:47:00.588897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:43.913 [2024-10-07 14:47:00.588909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.913 [2024-10-07 14:47:00.588922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:43.913 [2024-10-07 14:47:00.588932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.913 [2024-10-07 14:47:00.588943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:43.913 [2024-10-07 14:47:00.588954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.913 [2024-10-07 14:47:00.588966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:43.913 [2024-10-07 14:47:00.588976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:43.913 [2024-10-07 14:47:00.588987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:36:43.913 [2024-10-07 14:47:00.589039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039dd00 (9): Bad file descriptor 00:36:43.913 [2024-10-07 14:47:00.592780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:43.913 [2024-10-07 14:47:00.633980] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:36:43.913 10166.56 IOPS, 39.71 MiB/s [2024-10-07T12:47:07.622Z] 10178.50 IOPS, 39.76 MiB/s [2024-10-07T12:47:07.622Z] 10193.91 IOPS, 39.82 MiB/s [2024-10-07T12:47:07.622Z] 10186.92 IOPS, 39.79 MiB/s [2024-10-07T12:47:07.622Z] 10188.23 IOPS, 39.80 MiB/s [2024-10-07T12:47:07.622Z] 10193.64 IOPS, 39.82 MiB/s 00:36:43.913 Latency(us) 00:36:43.913 [2024-10-07T12:47:07.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:43.913 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:43.913 Verification LBA range: start 0x0 length 0x4000 00:36:43.913 NVMe0n1 : 15.01 10188.62 39.80 260.45 0.00 12220.58 580.27 15400.96 00:36:43.913 [2024-10-07T12:47:07.622Z] =================================================================================================================== 00:36:43.913 [2024-10-07T12:47:07.622Z] Total : 10188.62 39.80 260.45 0.00 12220.58 580.27 15400.96 00:36:43.913 Received shutdown signal, test time was about 15.000000 seconds 00:36:43.913 00:36:43.913 Latency(us) 00:36:43.913 [2024-10-07T12:47:07.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:43.913 [2024-10-07T12:47:07.622Z] =================================================================================================================== 00:36:43.913 [2024-10-07T12:47:07.622Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:43.913 14:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:36:43.913 14:47:07 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:36:43.913 14:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:36:43.913 14:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3230476 00:36:43.913 14:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:36:43.914 14:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3230476 /var/tmp/bdevperf.sock 00:36:43.914 14:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3230476 ']' 00:36:43.914 14:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:43.914 14:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:43.914 14:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:43.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
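The pass/fail gate in failover.sh above is simply a `grep -c 'Resetting controller successful'` over the captured log followed by a count comparison (`(( count != 3 ))`). The same check can be sketched standalone in Python; the log excerpt below is abbreviated from the bdev_nvme notices in this run, and the function name is hypothetical:

```python
import re

# failover.sh passes only if the expected number of controller resets
# completed; this mirrors its `grep -c 'Resetting controller successful'`.
RESET_OK = re.compile(r"Resetting controller successful")

def count_successful_resets(log_text: str) -> int:
    return len(RESET_OK.findall(log_text))

log = """\
[2024-10-07 14:47:00.633980] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
[2024-10-07 14:47:09.993345] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
"""
print(count_successful_resets(log))  # prints 2 for this excerpt
```

In the actual test the count is taken from try.txt and must equal 3, one reset per failover target exercised.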
00:36:43.914 14:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:43.914 14:47:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:36:44.856 14:47:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:44.856 14:47:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:36:44.856 14:47:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:44.856 [2024-10-07 14:47:08.512450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:44.856 14:47:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:36:45.116 [2024-10-07 14:47:08.684866] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:36:45.116 14:47:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:36:45.686 NVMe0n1 00:36:45.686 14:47:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:36:45.686 00:36:45.686 14:47:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 00:36:45.947 00:36:45.947 14:47:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:36:45.947 14:47:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:36:46.207 14:47:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:36:46.467 14:47:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:36:49.761 14:47:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:36:49.761 14:47:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:36:49.761 14:47:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3231644 00:36:49.761 14:47:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:49.761 14:47:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3231644 00:36:50.704 { 00:36:50.704 "results": [ 00:36:50.704 { 00:36:50.704 "job": "NVMe0n1", 00:36:50.704 "core_mask": "0x1", 00:36:50.704 "workload": "verify", 00:36:50.704 "status": "finished", 00:36:50.704 "verify_range": { 00:36:50.704 "start": 0, 00:36:50.704 "length": 16384 00:36:50.704 }, 00:36:50.704 "queue_depth": 128, 00:36:50.704 "io_size": 4096, 00:36:50.704 "runtime": 1.015028, 00:36:50.704 "iops": 9872.63405541522, 00:36:50.704 "mibps": 38.5649767789657, 00:36:50.704 "io_failed": 0, 00:36:50.704 "io_timeout": 0, 00:36:50.704 "avg_latency_us": 12903.01515617204, 00:36:50.704 
"min_latency_us": 2635.0933333333332, 00:36:50.704 "max_latency_us": 15291.733333333334 00:36:50.704 } 00:36:50.704 ], 00:36:50.704 "core_count": 1 00:36:50.704 } 00:36:50.704 14:47:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:36:50.704 [2024-10-07 14:47:07.584593] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:36:50.704 [2024-10-07 14:47:07.584705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3230476 ] 00:36:50.704 [2024-10-07 14:47:07.704208] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:50.704 [2024-10-07 14:47:07.883851] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:50.704 [2024-10-07 14:47:09.981855] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:36:50.704 [2024-10-07 14:47:09.981925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:50.704 [2024-10-07 14:47:09.981943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:50.704 [2024-10-07 14:47:09.981958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:50.704 [2024-10-07 14:47:09.981970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:50.704 [2024-10-07 14:47:09.981982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:50.704 [2024-10-07 
14:47:09.981992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:50.704 [2024-10-07 14:47:09.982010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:50.704 [2024-10-07 14:47:09.982021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:50.704 [2024-10-07 14:47:09.982037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:50.704 [2024-10-07 14:47:09.982089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:50.704 [2024-10-07 14:47:09.982121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039dd00 (9): Bad file descriptor 00:36:50.704 [2024-10-07 14:47:09.993345] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:36:50.704 Running I/O for 1 seconds... 
00:36:50.704 9819.00 IOPS, 38.36 MiB/s 00:36:50.704 Latency(us) 00:36:50.704 [2024-10-07T12:47:14.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:50.704 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:50.704 Verification LBA range: start 0x0 length 0x4000 00:36:50.704 NVMe0n1 : 1.02 9872.63 38.56 0.00 0.00 12903.02 2635.09 15291.73 00:36:50.704 [2024-10-07T12:47:14.413Z] =================================================================================================================== 00:36:50.704 [2024-10-07T12:47:14.413Z] Total : 9872.63 38.56 0.00 0.00 12903.02 2635.09 15291.73 00:36:50.704 14:47:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:36:50.704 14:47:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:36:50.965 14:47:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:36:51.226 14:47:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:36:51.226 14:47:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:36:51.226 14:47:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:36:51.487 14:47:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:36:54.788 14:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:36:54.788 14:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:36:54.788 14:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3230476 00:36:54.788 14:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3230476 ']' 00:36:54.788 14:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3230476 00:36:54.788 14:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:36:54.788 14:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:54.788 14:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3230476 00:36:54.788 14:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:54.788 14:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:54.788 14:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3230476' 00:36:54.788 killing process with pid 3230476 00:36:54.788 14:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3230476 00:36:54.788 14:47:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3230476 00:36:55.361 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:36:55.361 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:55.622 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:36:55.622 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:36:55.622 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:36:55.622 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:55.622 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:36:55.622 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:55.622 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:36:55.622 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:55.622 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:55.622 rmmod nvme_tcp 00:36:55.622 rmmod nvme_fabrics 00:36:55.622 rmmod nvme_keyring 00:36:55.622 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:55.622 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:36:55.622 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:36:55.622 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 3226709 ']' 00:36:55.622 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 3226709 00:36:55.622 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3226709 ']' 00:36:55.622 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3226709 00:36:55.622 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:36:55.622 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:55.622 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3226709 00:36:55.883 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:36:55.883 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:55.883 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3226709' 00:36:55.883 killing process with pid 3226709 00:36:55.883 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3226709 00:36:55.883 14:47:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3226709 00:36:56.455 14:47:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:56.455 14:47:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:56.455 14:47:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:56.455 14:47:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:36:56.455 14:47:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:36:56.455 14:47:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:36:56.455 14:47:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:56.455 14:47:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:56.455 14:47:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:56.455 14:47:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:56.455 14:47:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:56.455 14:47:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:59.002 00:36:59.002 real 0m42.093s 00:36:59.002 user 2m8.963s 00:36:59.002 sys 
0m9.001s 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:36:59.002 ************************************ 00:36:59.002 END TEST nvmf_failover 00:36:59.002 ************************************ 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.002 ************************************ 00:36:59.002 START TEST nvmf_host_discovery 00:36:59.002 ************************************ 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:36:59.002 * Looking for test storage... 
00:36:59.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:59.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:59.002 --rc genhtml_branch_coverage=1 00:36:59.002 --rc genhtml_function_coverage=1 00:36:59.002 --rc 
genhtml_legend=1 00:36:59.002 --rc geninfo_all_blocks=1 00:36:59.002 --rc geninfo_unexecuted_blocks=1 00:36:59.002 00:36:59.002 ' 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:59.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:59.002 --rc genhtml_branch_coverage=1 00:36:59.002 --rc genhtml_function_coverage=1 00:36:59.002 --rc genhtml_legend=1 00:36:59.002 --rc geninfo_all_blocks=1 00:36:59.002 --rc geninfo_unexecuted_blocks=1 00:36:59.002 00:36:59.002 ' 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:59.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:59.002 --rc genhtml_branch_coverage=1 00:36:59.002 --rc genhtml_function_coverage=1 00:36:59.002 --rc genhtml_legend=1 00:36:59.002 --rc geninfo_all_blocks=1 00:36:59.002 --rc geninfo_unexecuted_blocks=1 00:36:59.002 00:36:59.002 ' 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:59.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:59.002 --rc genhtml_branch_coverage=1 00:36:59.002 --rc genhtml_function_coverage=1 00:36:59.002 --rc genhtml_legend=1 00:36:59.002 --rc geninfo_all_blocks=1 00:36:59.002 --rc geninfo_unexecuted_blocks=1 00:36:59.002 00:36:59.002 ' 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:59.002 14:47:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:59.002 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:59.003 14:47:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:59.003 14:47:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:59.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 
00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:36:59.003 14:47:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:37:07.146 
14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:07.146 14:47:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:07.146 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:07.146 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:07.146 Found net devices under 0000:31:00.0: cvl_0_0 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:07.146 Found net devices under 0000:31:00.1: cvl_0_1 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:07.146 14:47:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:07.146 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:07.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:07.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:37:07.146 00:37:07.146 --- 10.0.0.2 ping statistics --- 00:37:07.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:07.146 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:37:07.146 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:07.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:07.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:37:07.146 00:37:07.146 --- 10.0.0.1 ping statistics --- 00:37:07.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:07.146 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:37:07.146 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:07.146 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:37:07.146 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:07.146 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:07.146 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:07.146 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:07.146 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:07.146 
14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:07.146 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:07.146 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:37:07.147 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:07.147 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:07.147 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:07.147 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=3237201 00:37:07.147 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 3237201 00:37:07.147 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:37:07.147 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3237201 ']' 00:37:07.147 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:07.147 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:07.147 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:07.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:07.147 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:07.147 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:07.147 [2024-10-07 14:47:30.168207] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:37:07.147 [2024-10-07 14:47:30.168309] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:07.147 [2024-10-07 14:47:30.310413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:07.147 [2024-10-07 14:47:30.518383] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:07.147 [2024-10-07 14:47:30.518465] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:07.147 [2024-10-07 14:47:30.518478] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:07.147 [2024-10-07 14:47:30.518493] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:07.147 [2024-10-07 14:47:30.518504] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:07.147 [2024-10-07 14:47:30.519986] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:07.408 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:07.408 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:37:07.408 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:07.408 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:07.408 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:07.408 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:07.408 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:07.408 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.408 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:07.408 [2024-10-07 14:47:30.980574] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:07.408 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.408 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:37:07.408 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.408 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:07.408 [2024-10-07 14:47:30.988772] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:37:07.408 14:47:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.408 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:37:07.408 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.408 14:47:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:07.408 null0 00:37:07.408 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.408 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:37:07.408 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.408 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:07.408 null1 00:37:07.408 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.408 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:37:07.408 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.408 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:07.408 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.408 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3237320 00:37:07.408 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3237320 /tmp/host.sock 00:37:07.408 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:37:07.408 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@831 -- # '[' -z 3237320 ']' 00:37:07.408 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:37:07.408 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:07.408 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:37:07.408 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:37:07.408 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:07.408 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:07.408 [2024-10-07 14:47:31.100901] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:37:07.408 [2024-10-07 14:47:31.101009] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3237320 ] 00:37:07.669 [2024-10-07 14:47:31.218538] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:07.929 [2024-10-07 14:47:31.400093] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:08.189 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:08.189 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:37:08.189 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:08.189 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:37:08.189 
14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.189 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:08.190 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.190 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:37:08.190 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.190 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:08.190 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.190 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:37:08.190 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:37:08.190 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:08.190 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:37:08.190 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:08.190 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.190 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:08.190 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:08.450 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.450 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:37:08.450 14:47:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:37:08.450 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:08.450 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:08.450 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.450 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:08.450 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:08.450 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:08.450 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.450 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:37:08.450 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:37:08.450 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.450 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:08.450 14:47:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 
00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:37:08.450 
14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:08.450 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:08.712 [2024-10-07 14:47:32.220046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:08.712 
14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:08.712 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.973 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:37:08.973 14:47:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:37:09.231 [2024-10-07 14:47:32.930241] bdev_nvme.c:7164:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:37:09.231 [2024-10-07 14:47:32.930279] bdev_nvme.c:7244:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:37:09.231 [2024-10-07 14:47:32.930310] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:37:09.490 [2024-10-07 14:47:33.018577] bdev_nvme.c:7093:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:37:09.750 [2024-10-07 14:47:33.206465] bdev_nvme.c:6983:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach 
nvme0 done 00:37:09.750 [2024-10-07 14:47:33.206498] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:37:09.750 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:09.750 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:37:09.750 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:37:09.750 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:37:09.750 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:09.750 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.750 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:09.750 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:09.750 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:09.750 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 
00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:10.010 14:47:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.010 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 
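The `waitforcondition` calls threaded through this trace (the `@914`–`@920` lines from `autotest_common.sh`) are a generic polling loop: the test passes a bash expression as a string, and the helper re-`eval`s it with a bounded retry budget. Below is a minimal stand-alone sketch reconstructed from the logged line tags; the retry count (`max=10`), the `eval`, and the `sleep 1` back-off are all visible in the trace, but the timeout return value is an assumption, since the log only ever shows the success path (`@918 return 0`).

```shell
# Minimal sketch of the waitforcondition helper driving this trace
# (autotest_common.sh @914-@920 in the log tags). The timeout branch is an
# assumption; the trace only shows the success path.
waitforcondition() {
    local cond=$1   # a bash expression, e.g. '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
    local max=10    # poll at most 10 times (@915-@916)
    while ((max--)); do
        if eval "$cond"; then
            return 0    # condition became true (@917-@918)
        fi
        sleep 1         # back off before re-evaluating (@920)
    done
    return 1            # assumed timeout behaviour (not shown in the trace)
}
```

This is why assertions such as `[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]` can tolerate the asynchronous discovery events: the condition is simply re-evaluated once per second until it holds.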
00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:10.011 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:10.271 [2024-10-07 14:47:33.732571] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:37:10.271 [2024-10-07 14:47:33.733167] bdev_nvme.c:7146:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:37:10.271 [2024-10-07 14:47:33.733205] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:10.271 14:47:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.271 [2024-10-07 14:47:33.820642] bdev_nvme.c:7088:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # local max=10 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:37:10.271 14:47:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:37:10.271 [2024-10-07 14:47:33.928091] bdev_nvme.c:6983:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:37:10.271 [2024-10-07 14:47:33.928125] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:37:10.271 [2024-10-07 14:47:33.928136] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 
found again 00:37:11.210 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:11.210 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:37:11.210 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:37:11.210 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:37:11.210 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.210 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:11.210 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:37:11.210 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:37:11.210 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:37:11.210 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.471 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:37:11.471 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 
'cond=get_notification_count && ((notification_count == expected_count))' 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:11.472 [2024-10-07 14:47:34.988154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:11.472 [2024-10-07 14:47:34.988193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:11.472 [2024-10-07 14:47:34.988209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:11.472 [2024-10-07 14:47:34.988220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:11.472 [2024-10-07 14:47:34.988232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:11.472 [2024-10-07 14:47:34.988248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:11.472 [2024-10-07 14:47:34.988260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:11.472 [2024-10-07 14:47:34.988270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:11.472 [2024-10-07 14:47:34.988281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e480 is same with the state(6) to be set 00:37:11.472 [2024-10-07 14:47:34.988894] bdev_nvme.c:7146:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:37:11.472 [2024-10-07 14:47:34.988918] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:37:11.472 [2024-10-07 14:47:34.998144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e480 (9): Bad file descriptor 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:11.472 14:47:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:11.472 [2024-10-07 14:47:35.008184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:11.472 [2024-10-07 14:47:35.008558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.472 [2024-10-07 14:47:35.008584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e480 with addr=10.0.0.2, port=4420 00:37:11.472 [2024-10-07 14:47:35.008597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e480 is same with the state(6) to be set 00:37:11.472 [2024-10-07 14:47:35.008615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e480 (9): Bad file descriptor 00:37:11.472 [2024-10-07 14:47:35.008633] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:11.472 [2024-10-07 14:47:35.008644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:11.472 [2024-10-07 14:47:35.008657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:11.472 [2024-10-07 14:47:35.008680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.472 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.472 [2024-10-07 14:47:35.018273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:11.472 [2024-10-07 14:47:35.018653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.472 [2024-10-07 14:47:35.018678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e480 with addr=10.0.0.2, port=4420 00:37:11.472 [2024-10-07 14:47:35.018690] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e480 is same with the state(6) to be set 00:37:11.472 [2024-10-07 14:47:35.018707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e480 (9): Bad file descriptor 00:37:11.472 [2024-10-07 14:47:35.018729] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:11.472 [2024-10-07 14:47:35.018739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:11.472 [2024-10-07 14:47:35.018749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:11.472 [2024-10-07 14:47:35.018766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
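The repeated `connect() failed, errno = 111` entries that begin here are the host's reconnect loop hitting the `10.0.0.2:4420` listener that was just torn down by `nvmf_subsystem_remove_listener` (`@127` above): on Linux, errno 111 is `ECONNREFUSED`, so each `bdev_nvme` reset attempt fails at the TCP connect and the controller stays in the failed state until the path is dropped. A quick, stand-alone way to decode an errno number (shown here for 111; the mapping is Linux-specific):

```shell
# Decode errno 111 from the connect() failures in this trace.
# On Linux this prints the ECONNREFUSED code and its message.
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
```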
00:37:11.472 [2024-10-07 14:47:35.028347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:11.472 [2024-10-07 14:47:35.028726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.472 [2024-10-07 14:47:35.028747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e480 with addr=10.0.0.2, port=4420 00:37:11.472 [2024-10-07 14:47:35.028758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e480 is same with the state(6) to be set 00:37:11.472 [2024-10-07 14:47:35.028774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e480 (9): Bad file descriptor 00:37:11.472 [2024-10-07 14:47:35.028789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:11.472 [2024-10-07 14:47:35.028798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:11.472 [2024-10-07 14:47:35.028808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:11.472 [2024-10-07 14:47:35.028824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
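The `is_notification_count_eq` checks earlier in this trace (`@108`, `@114`, `@123`) rest on the `get_notification_count` helper at `host/discovery.sh @74-@75`: fetch all notifications newer than the last seen `notify_id`, count them with `jq`, and advance the cursor. The sketch below reconstructs that bookkeeping; `rpc_cmd` is stubbed with canned JSON so the pipeline runs stand-alone (the real helper queries the host app over `/tmp/host.sock`), and the `notify_id` update rule is inferred from the logged values (0 → 1 → 2), not taken from the SPDK source.

```shell
# Stub standing in for: rpc_cmd -s /tmp/host.sock notify_get_notifications -i <id>
# (hypothetical payload; the real RPC returns the target's notification list)
rpc_cmd() {
    echo '[{"id": 1, "type": "bdev_register", "ctx": "nvme0n1"}]'
}

# Reconstructed from host/discovery.sh @74-@75 in the trace: count new
# notifications and advance the notify_id cursor past them.
get_notification_count() {
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}

notify_id=0
get_notification_count
echo "$notification_count $notify_id"
```

With the one-element stub above this reports one new notification and moves the cursor to 1, mirroring the `notification_count=1` / `notify_id=1` pair logged at `@74`/`@75`.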
00:37:11.472 [2024-10-07 14:47:35.038420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:11.472 [2024-10-07 14:47:35.038805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.472 [2024-10-07 14:47:35.038827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e480 with addr=10.0.0.2, port=4420 00:37:11.472 [2024-10-07 14:47:35.038838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e480 is same with the state(6) to be set 00:37:11.472 [2024-10-07 14:47:35.038855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e480 (9): Bad file descriptor 00:37:11.472 [2024-10-07 14:47:35.038870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:11.472 [2024-10-07 14:47:35.038879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:11.472 [2024-10-07 14:47:35.038889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:11.472 [2024-10-07 14:47:35.038906] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.472 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:11.472 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:11.472 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:37:11.472 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:37:11.472 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:11.472 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:11.472 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:37:11.472 [2024-10-07 14:47:35.048502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:11.472 [2024-10-07 14:47:35.048839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.472 [2024-10-07 14:47:35.048860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e480 with addr=10.0.0.2, port=4420 00:37:11.472 [2024-10-07 14:47:35.048871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e480 is same with the state(6) to be set 00:37:11.472 [2024-10-07 14:47:35.048887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e480 (9): Bad file descriptor 00:37:11.472 [2024-10-07 14:47:35.048901] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:11.472 [2024-10-07 14:47:35.048910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller 
reinitialization failed 00:37:11.472 [2024-10-07 14:47:35.048920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:11.473 [2024-10-07 14:47:35.048937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:11.473 [2024-10-07 14:47:35.059251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:11.473 [2024-10-07 14:47:35.059649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.473 [2024-10-07 14:47:35.059671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e480 with addr=10.0.0.2, port=4420 00:37:11.473 [2024-10-07 14:47:35.059682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e480 is same with the state(6) to be set 00:37:11.473 [2024-10-07 14:47:35.059698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e480 (9): Bad file descriptor 00:37:11.473 [2024-10-07 14:47:35.059713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:11.473 [2024-10-07 
14:47:35.059722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:11.473 [2024-10-07 14:47:35.059732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:11.473 [2024-10-07 14:47:35.059749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.473 [2024-10-07 14:47:35.069330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:11.473 [2024-10-07 14:47:35.069684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.473 [2024-10-07 14:47:35.069705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e480 with addr=10.0.0.2, port=4420 00:37:11.473 [2024-10-07 14:47:35.069716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e480 is same with the state(6) to be set 00:37:11.473 [2024-10-07 14:47:35.069732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e480 (9): Bad file descriptor 00:37:11.473 [2024-10-07 14:47:35.069750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:11.473 [2024-10-07 14:47:35.069760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:11.473 [2024-10-07 14:47:35.069770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:11.473 [2024-10-07 14:47:35.069785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.473 [2024-10-07 14:47:35.079404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:11.473 [2024-10-07 14:47:35.079796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.473 [2024-10-07 14:47:35.079818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e480 with addr=10.0.0.2, port=4420 00:37:11.473 [2024-10-07 14:47:35.079829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e480 is same with the state(6) to be set 00:37:11.473 [2024-10-07 14:47:35.079845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e480 (9): Bad file descriptor 00:37:11.473 [2024-10-07 14:47:35.079859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:11.473 [2024-10-07 14:47:35.079869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:11.473 [2024-10-07 14:47:35.079879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:11.473 [2024-10-07 14:47:35.079895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.473 [2024-10-07 14:47:35.089480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:11.473 [2024-10-07 14:47:35.089835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.473 [2024-10-07 14:47:35.089856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e480 with addr=10.0.0.2, port=4420 00:37:11.473 [2024-10-07 14:47:35.089867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e480 is same with the state(6) to be set 00:37:11.473 [2024-10-07 14:47:35.089883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e480 (9): Bad file descriptor 00:37:11.473 [2024-10-07 14:47:35.089897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:11.473 [2024-10-07 14:47:35.089907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:11.473 [2024-10-07 14:47:35.089917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:11.473 [2024-10-07 14:47:35.089932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:37:11.473 [2024-10-07 14:47:35.099551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:37:11.473 [2024-10-07 14:47:35.099739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.473 [2024-10-07 14:47:35.099759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e480 with addr=10.0.0.2, port=4420 00:37:11.473 [2024-10-07 14:47:35.099770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e480 is same with the state(6) to be set 00:37:11.473 [2024-10-07 14:47:35.099792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e480 (9): Bad file descriptor 00:37:11.473 [2024-10-07 14:47:35.099806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:11.473 [2024-10-07 14:47:35.099818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:11.473 [2024-10-07 14:47:35.099829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:37:11.473 [2024-10-07 14:47:35.099845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:37:11.473 [2024-10-07 14:47:35.109621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:11.473 [2024-10-07 14:47:35.109965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:11.473 [2024-10-07 14:47:35.109986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e480 with addr=10.0.0.2, port=4420 00:37:11.473 [2024-10-07 14:47:35.109997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e480 is same with the state(6) to be set 00:37:11.473 [2024-10-07 14:47:35.110020] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e480 (9): Bad file descriptor 00:37:11.473 [2024-10-07 14:47:35.110034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:11.473 [2024-10-07 14:47:35.110044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:37:11.473 [2024-10-07 14:47:35.110054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:11.473 [2024-10-07 14:47:35.110070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:37:11.473 [2024-10-07 14:47:35.118528] bdev_nvme.c:6951:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:37:11.473 [2024-10-07 14:47:35.118561] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:37:11.473 14:47:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- 
# rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:37:12.854 14:47:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@916 -- # (( max-- )) 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:37:12.854 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:37:12.855 14:47:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.855 14:47:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:13.794 [2024-10-07 14:47:37.486021] bdev_nvme.c:7164:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:37:13.794 [2024-10-07 14:47:37.486049] bdev_nvme.c:7244:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:37:13.794 [2024-10-07 14:47:37.486078] 
bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:37:14.053 [2024-10-07 14:47:37.573370] bdev_nvme.c:7093:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:37:14.313 [2024-10-07 14:47:37.885855] bdev_nvme.c:6983:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:37:14.313 [2024-10-07 14:47:37.885901] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
-w 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:14.313 request: 00:37:14.313 { 00:37:14.313 "name": "nvme", 00:37:14.313 "trtype": "tcp", 00:37:14.313 "traddr": "10.0.0.2", 00:37:14.313 "adrfam": "ipv4", 00:37:14.313 "trsvcid": "8009", 00:37:14.313 "hostnqn": "nqn.2021-12.io.spdk:test", 00:37:14.313 "wait_for_attach": true, 00:37:14.313 "method": "bdev_nvme_start_discovery", 00:37:14.313 "req_id": 1 00:37:14.313 } 00:37:14.313 Got JSON-RPC error response 00:37:14.313 response: 00:37:14.313 { 00:37:14.313 "code": -17, 00:37:14.313 "message": "File exists" 00:37:14.313 } 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 
00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:14.313 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:14.314 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:14.314 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.314 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:14.314 14:47:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.314 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:37:14.314 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:14.314 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:37:14.314 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:14.314 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:14.314 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:14.314 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:14.314 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:14.314 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:37:14.314 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.314 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:14.574 request: 00:37:14.574 { 00:37:14.574 "name": "nvme_second", 00:37:14.574 "trtype": "tcp", 00:37:14.574 "traddr": "10.0.0.2", 00:37:14.574 "adrfam": "ipv4", 00:37:14.574 "trsvcid": "8009", 00:37:14.574 "hostnqn": "nqn.2021-12.io.spdk:test", 00:37:14.574 "wait_for_attach": true, 00:37:14.574 "method": "bdev_nvme_start_discovery", 00:37:14.574 "req_id": 1 00:37:14.574 } 00:37:14.574 Got JSON-RPC error response 00:37:14.574 response: 00:37:14.574 { 00:37:14.574 "code": -17, 00:37:14.574 "message": "File exists" 00:37:14.574 } 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:14.574 14:47:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 
00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.574 14:47:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:15.513 [2024-10-07 14:47:39.141724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.513 [2024-10-07 14:47:39.141772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a1400 with addr=10.0.0.2, port=8010 00:37:15.513 [2024-10-07 14:47:39.141819] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:15.513 [2024-10-07 14:47:39.141832] nvme.c: 
831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:15.513 [2024-10-07 14:47:39.141844] bdev_nvme.c:7226:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:37:16.451 [2024-10-07 14:47:40.144198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.451 [2024-10-07 14:47:40.144247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a1680 with addr=10.0.0.2, port=8010 00:37:16.451 [2024-10-07 14:47:40.144295] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:16.451 [2024-10-07 14:47:40.144307] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:16.451 [2024-10-07 14:47:40.144319] bdev_nvme.c:7226:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:37:17.833 [2024-10-07 14:47:41.146058] bdev_nvme.c:7207:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:37:17.833 request: 00:37:17.833 { 00:37:17.833 "name": "nvme_second", 00:37:17.833 "trtype": "tcp", 00:37:17.833 "traddr": "10.0.0.2", 00:37:17.833 "adrfam": "ipv4", 00:37:17.833 "trsvcid": "8010", 00:37:17.833 "hostnqn": "nqn.2021-12.io.spdk:test", 00:37:17.833 "wait_for_attach": false, 00:37:17.833 "attach_timeout_ms": 3000, 00:37:17.833 "method": "bdev_nvme_start_discovery", 00:37:17.833 "req_id": 1 00:37:17.833 } 00:37:17.833 Got JSON-RPC error response 00:37:17.833 response: 00:37:17.833 { 00:37:17.833 "code": -110, 00:37:17.833 "message": "Connection timed out" 00:37:17.833 } 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3237320 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 
00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:17.833 rmmod nvme_tcp 00:37:17.833 rmmod nvme_fabrics 00:37:17.833 rmmod nvme_keyring 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 3237201 ']' 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 3237201 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 3237201 ']' 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 3237201 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3237201 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3237201' 00:37:17.833 killing process with pid 3237201 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 3237201 00:37:17.833 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 3237201 00:37:18.404 14:47:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:18.404 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:18.404 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:18.404 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:37:18.404 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:18.404 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:37:18.404 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:37:18.404 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:18.404 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:18.404 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:18.404 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:18.404 14:47:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:20.948 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:20.948 00:37:20.948 real 0m21.777s 00:37:20.948 user 0m26.212s 00:37:20.948 sys 0m7.495s 00:37:20.948 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:20.948 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:37:20.948 ************************************ 00:37:20.948 END TEST nvmf_host_discovery 00:37:20.948 ************************************ 00:37:20.948 14:47:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test 
nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:37:20.948 14:47:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:20.948 14:47:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:20.948 14:47:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:20.948 ************************************ 00:37:20.948 START TEST nvmf_host_multipath_status 00:37:20.948 ************************************ 00:37:20.948 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:37:20.948 * Looking for test storage... 00:37:20.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:20.948 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:37:20.948 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:37:20.948 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:37:20.949 14:47:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:37:20.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.949 --rc genhtml_branch_coverage=1 00:37:20.949 --rc genhtml_function_coverage=1 00:37:20.949 --rc genhtml_legend=1 00:37:20.949 --rc 
geninfo_all_blocks=1 00:37:20.949 --rc geninfo_unexecuted_blocks=1 00:37:20.949 00:37:20.949 ' 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:37:20.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.949 --rc genhtml_branch_coverage=1 00:37:20.949 --rc genhtml_function_coverage=1 00:37:20.949 --rc genhtml_legend=1 00:37:20.949 --rc geninfo_all_blocks=1 00:37:20.949 --rc geninfo_unexecuted_blocks=1 00:37:20.949 00:37:20.949 ' 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:37:20.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.949 --rc genhtml_branch_coverage=1 00:37:20.949 --rc genhtml_function_coverage=1 00:37:20.949 --rc genhtml_legend=1 00:37:20.949 --rc geninfo_all_blocks=1 00:37:20.949 --rc geninfo_unexecuted_blocks=1 00:37:20.949 00:37:20.949 ' 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:37:20.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:20.949 --rc genhtml_branch_coverage=1 00:37:20.949 --rc genhtml_function_coverage=1 00:37:20.949 --rc genhtml_legend=1 00:37:20.949 --rc geninfo_all_blocks=1 00:37:20.949 --rc geninfo_unexecuted_blocks=1 00:37:20.949 00:37:20.949 ' 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:37:20.949 14:47:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:20.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:37:20.949 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:20.950 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:20.950 14:47:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:20.950 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:20.950 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:20.950 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:20.950 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:20.950 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:20.950 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:20.950 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:20.950 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:37:20.950 14:47:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:37:29.089 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:29.089 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:37:29.089 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:29.089 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:29.089 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:29.089 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:29.089 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:29.089 
14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:37:29.089 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:29.089 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:37:29.089 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:37:29.089 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:37:29.089 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:37:29.089 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:37:29.089 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:37:29.089 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:29.089 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:29.089 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:29.089 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:29.089 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:29.089 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:29.090 14:47:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:29.090 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:29.090 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 
00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:29.090 Found net devices under 0000:31:00.0: cvl_0_0 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:29.090 Found net devices under 0000:31:00.1: cvl_0_1 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 
00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:29.090 14:47:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:29.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:29.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.493 ms 00:37:29.090 00:37:29.090 --- 10.0.0.2 ping statistics --- 00:37:29.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:29.090 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:29.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:29.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:37:29.090 00:37:29.090 --- 10.0.0.1 ping statistics --- 00:37:29.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:29.090 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=3243809 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # waitforlisten 3243809 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3243809 ']' 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:29.090 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:29.091 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:29.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:29.091 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:29.091 14:47:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:37:29.091 [2024-10-07 14:47:51.842565] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:37:29.091 [2024-10-07 14:47:51.842663] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:29.091 [2024-10-07 14:47:51.968907] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:29.091 [2024-10-07 14:47:52.148461] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:29.091 [2024-10-07 14:47:52.148512] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:37:29.091 [2024-10-07 14:47:52.148524] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:29.091 [2024-10-07 14:47:52.148536] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:29.091 [2024-10-07 14:47:52.148545] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:29.091 [2024-10-07 14:47:52.150160] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:37:29.091 [2024-10-07 14:47:52.150308] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:37:29.091 14:47:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:29.091 14:47:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:37:29.091 14:47:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:29.091 14:47:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:29.091 14:47:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:37:29.091 14:47:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:29.091 14:47:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3243809 00:37:29.091 14:47:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:29.351 [2024-10-07 14:47:52.843954] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:29.351 14:47:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:37:29.611 Malloc0 00:37:29.611 14:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:37:29.611 14:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:29.871 14:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:29.871 [2024-10-07 14:47:53.577279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:30.131 14:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:37:30.131 [2024-10-07 14:47:53.745723] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:37:30.131 14:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3244175 00:37:30.131 14:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:30.131 14:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:37:30.131 14:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3244175 /var/tmp/bdevperf.sock 00:37:30.131 14:47:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3244175 ']' 00:37:30.131 14:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:30.131 14:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:30.131 14:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:30.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:30.131 14:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:30.131 14:47:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:37:31.071 14:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:31.071 14:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:37:31.071 14:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:37:31.071 14:47:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:37:31.641 Nvme0n1 00:37:31.641 14:47:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
-x multipath -l -1 -o 10 00:37:31.902 Nvme0n1 00:37:31.902 14:47:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:37:31.902 14:47:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:37:33.813 14:47:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:37:33.813 14:47:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:37:34.074 14:47:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:37:34.335 14:47:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:37:35.276 14:47:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:37:35.276 14:47:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:37:35.276 14:47:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:35.276 14:47:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:37:35.536 14:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e 
]] 00:37:35.536 14:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:37:35.536 14:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:35.536 14:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:37:35.797 14:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:35.797 14:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:37:35.797 14:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:35.797 14:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:37:35.797 14:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:35.797 14:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:37:35.797 14:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:35.797 14:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:37:36.058 14:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:37:36.058 14:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:37:36.058 14:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:36.058 14:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:37:36.319 14:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:36.319 14:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:37:36.319 14:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:36.319 14:47:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:37:36.319 14:48:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:36.319 14:48:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:37:36.319 14:48:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:37:36.593 14:48:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:37:36.854 14:48:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:37:37.792 14:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:37:37.792 14:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:37:37.792 14:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:37.792 14:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:37:38.052 14:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:38.052 14:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:37:38.052 14:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:38.052 14:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:37:38.052 14:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:38.052 14:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:37:38.052 14:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:37:38.052 
14:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:38.312 14:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:38.312 14:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:37:38.312 14:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:38.312 14:48:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:37:38.573 14:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:38.573 14:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:37:38.573 14:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:38.573 14:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:37:38.573 14:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:38.573 14:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:37:38.573 14:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:37:38.573 14:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:37:38.832 14:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:38.832 14:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:37:38.832 14:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:37:39.092 14:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:37:39.351 14:48:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:37:40.290 14:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:37:40.290 14:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:37:40.290 14:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:40.290 14:48:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:37:40.550 14:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:40.550 14:48:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:37:40.550 14:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:40.550 14:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:37:40.550 14:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:40.550 14:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:37:40.550 14:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:37:40.550 14:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:40.810 14:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:40.810 14:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:37:40.810 14:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:40.810 14:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:37:41.070 14:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:41.070 
14:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:37:41.070 14:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:41.070 14:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:37:41.070 14:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:41.070 14:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:37:41.330 14:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:41.330 14:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:37:41.330 14:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:41.330 14:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:37:41.330 14:48:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:37:41.590 14:48:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 -n inaccessible 00:37:41.851 14:48:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:37:42.791 14:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:37:42.791 14:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:37:42.791 14:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:42.791 14:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:37:42.791 14:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:42.791 14:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:37:42.791 14:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:37:42.791 14:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:43.050 14:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:43.050 14:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:37:43.050 14:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:43.050 
14:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:37:43.310 14:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:43.310 14:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:37:43.310 14:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:43.310 14:48:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:37:43.570 14:48:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:43.570 14:48:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:37:43.571 14:48:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:43.571 14:48:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:37:43.571 14:48:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:43.571 14:48:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:37:43.571 14:48:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:37:43.571 14:48:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:37:43.831 14:48:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:43.831 14:48:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:37:43.831 14:48:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:37:44.091 14:48:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:37:44.091 14:48:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:37:45.474 14:48:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:37:45.474 14:48:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:37:45.474 14:48:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:45.474 14:48:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:37:45.474 14:48:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:45.474 14:48:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:37:45.474 14:48:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:45.474 14:48:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:37:45.474 14:48:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:45.474 14:48:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:37:45.474 14:48:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:45.474 14:48:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:37:45.734 14:48:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:45.734 14:48:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:37:45.734 14:48:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:45.734 14:48:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:37:45.993 14:48:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:45.993 
14:48:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:37:45.993 14:48:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:45.993 14:48:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:37:45.993 14:48:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:45.993 14:48:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:37:45.993 14:48:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:45.993 14:48:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:37:46.253 14:48:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:46.253 14:48:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:37:46.253 14:48:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:37:46.514 14:48:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 -n optimized 00:37:46.514 14:48:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:37:47.900 14:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:37:47.900 14:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:37:47.900 14:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:47.900 14:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:37:47.900 14:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:47.900 14:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:37:47.900 14:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:47.900 14:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:37:47.900 14:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:47.900 14:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:37:47.900 14:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:47.900 14:48:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:37:48.161 14:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:48.161 14:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:37:48.161 14:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:48.161 14:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:37:48.421 14:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:48.421 14:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:37:48.421 14:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:48.421 14:48:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:37:48.421 14:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:48.421 14:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:37:48.421 14:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:48.421 
14:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:37:48.681 14:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:48.681 14:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:37:48.942 14:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:37:48.942 14:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:37:48.942 14:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:37:49.203 14:48:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:37:50.174 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:37:50.175 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:37:50.175 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:50.175 14:48:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").current' 00:37:50.492 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:50.492 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:37:50.492 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:50.492 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:37:50.808 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:50.808 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:37:50.808 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:50.808 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:37:50.808 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:50.808 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:37:50.808 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:50.808 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").connected' 00:37:51.100 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:51.100 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:37:51.100 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:51.100 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:37:51.100 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:51.100 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:37:51.100 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:37:51.100 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:51.361 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:51.361 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:37:51.361 14:48:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:37:51.622 14:48:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:37:51.881 14:48:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:37:52.823 14:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:37:52.823 14:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:37:52.823 14:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:52.823 14:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:37:52.823 14:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:52.823 14:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:37:52.823 14:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:52.823 14:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:37:53.082 14:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:53.082 14:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:37:53.082 
14:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:53.082 14:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:37:53.342 14:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:53.342 14:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:37:53.342 14:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:53.342 14:48:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:37:53.602 14:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:53.602 14:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:37:53.602 14:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:53.602 14:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:37:53.602 14:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:53.602 14:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
00:37:53.602 14:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:53.602 14:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:37:53.862 14:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:53.862 14:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:37:53.862 14:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:37:54.122 14:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:37:54.122 14:48:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:37:55.502 14:48:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:37:55.502 14:48:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:37:55.502 14:48:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:55.502 14:48:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").current' 00:37:55.502 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:55.502 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:37:55.502 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:55.502 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:37:55.503 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:55.503 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:37:55.762 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:55.762 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:37:55.762 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:55.762 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:37:55.762 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:55.762 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:37:56.023 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:56.023 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:37:56.023 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:56.023 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:37:56.284 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:56.284 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:37:56.284 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:37:56.284 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:56.284 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:56.284 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:37:56.284 14:48:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:37:56.544 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:37:56.804 14:48:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:37:57.745 14:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:37:57.745 14:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:37:57.745 14:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:37:57.745 14:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:58.005 14:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:58.005 14:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:37:58.005 14:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:58.006 14:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:37:58.006 14:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:58.006 14:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:37:58.006 14:48:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:58.006 14:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:37:58.266 14:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:58.266 14:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:37:58.266 14:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:58.266 14:48:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:37:58.525 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:58.525 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:37:58.525 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:58.525 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:37:58.785 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:37:58.785 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:37:58.785 
14:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:37:58.785 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:37:58.785 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:37:58.785 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3244175 00:37:58.785 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3244175 ']' 00:37:58.785 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3244175 00:37:58.785 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:37:58.785 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:58.785 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3244175 00:37:59.046 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:37:59.046 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:37:59.046 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3244175' 00:37:59.046 killing process with pid 3244175 00:37:59.046 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3244175 00:37:59.046 14:48:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3244175 00:37:59.046 { 00:37:59.046 
"results": [ 00:37:59.046 { 00:37:59.046 "job": "Nvme0n1", 00:37:59.046 "core_mask": "0x4", 00:37:59.046 "workload": "verify", 00:37:59.046 "status": "terminated", 00:37:59.046 "verify_range": { 00:37:59.046 "start": 0, 00:37:59.046 "length": 16384 00:37:59.046 }, 00:37:59.046 "queue_depth": 128, 00:37:59.046 "io_size": 4096, 00:37:59.046 "runtime": 26.870159, 00:37:59.046 "iops": 9657.553570859034, 00:37:59.046 "mibps": 37.7248186361681, 00:37:59.046 "io_failed": 0, 00:37:59.046 "io_timeout": 0, 00:37:59.046 "avg_latency_us": 13234.662603879255, 00:37:59.046 "min_latency_us": 320.85333333333335, 00:37:59.046 "max_latency_us": 3019898.88 00:37:59.046 } 00:37:59.046 ], 00:37:59.046 "core_count": 1 00:37:59.046 } 00:37:59.621 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3244175 00:37:59.621 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:37:59.621 [2024-10-07 14:47:53.838296] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:37:59.621 [2024-10-07 14:47:53.838416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3244175 ] 00:37:59.621 [2024-10-07 14:47:53.939995] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:59.621 [2024-10-07 14:47:54.076686] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:37:59.621 [2024-10-07 14:47:55.425044] bdev_nvme.c:5607:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:37:59.621 Running I/O for 90 seconds... 
00:37:59.621 8412.00 IOPS, 32.86 MiB/s [2024-10-07T12:48:23.330Z] 8505.00 IOPS, 33.22 MiB/s [2024-10-07T12:48:23.330Z] 8523.33 IOPS, 33.29 MiB/s [2024-10-07T12:48:23.330Z] 8517.50 IOPS, 33.27 MiB/s [2024-10-07T12:48:23.330Z] 8772.40 IOPS, 34.27 MiB/s [2024-10-07T12:48:23.330Z] 9235.67 IOPS, 36.08 MiB/s [2024-10-07T12:48:23.330Z] 9566.86 IOPS, 37.37 MiB/s [2024-10-07T12:48:23.330Z] 9531.50 IOPS, 37.23 MiB/s [2024-10-07T12:48:23.330Z] 9413.89 IOPS, 36.77 MiB/s [2024-10-07T12:48:23.330Z] 9331.30 IOPS, 36.45 MiB/s [2024-10-07T12:48:23.330Z] 9261.00 IOPS, 36.18 MiB/s [2024-10-07T12:48:23.330Z] [2024-10-07 14:48:07.558232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.621 [2024-10-07 14:48:07.558280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:37:59.621 [2024-10-07 14:48:07.558333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.621 [2024-10-07 14:48:07.558344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:37:59.621 [2024-10-07 14:48:07.558359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.621 [2024-10-07 14:48:07.558367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:37:59.621 [2024-10-07 14:48:07.558381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.621 [2024-10-07 14:48:07.558389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 
cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.558403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.558410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.558424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.558432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.558445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.558453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.558466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.558474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.558572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.558584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.558607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.558616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.558630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.558638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.558652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.558660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.558676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.558685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.558700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.558708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.558724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.558733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 
dnr:0 00:37:59.622 [2024-10-07 14:48:07.558747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.558756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.558793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.558803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.558818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.558826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.558840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.558848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.558863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.558870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.558885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 
[2024-10-07 14:48:07.558893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.558909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.558918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.558932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.558939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.558954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.558962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.559013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.559023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.559039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.559047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 
14:48:07.559062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.559070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.559085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.559093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.559115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.559123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.559138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.559146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.559161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 14:48:07.559169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.559184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.622 [2024-10-07 
14:48:07.559192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:37:59.622 [2024-10-07 14:48:07.559532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.559544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.559561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.559571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.559587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.559595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.559611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.559618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.559634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.559642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.559657] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.559665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.559680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.559689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.559705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.559713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.559831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.559842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.559859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.559867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.559883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.559891] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.559906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.559914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.559929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.559937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.559953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.559963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.559979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.559987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.560008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.560017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.560134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.560144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.560161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.560169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.560185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.560192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.560208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.560216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.560232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.560239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.560255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.560263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.560279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.560286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.560302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:92192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.560311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.560534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.560544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.560562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.560570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.560588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.560596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.560612] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.560620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.560637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.623 [2024-10-07 14:48:07.560645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:59.623 [2024-10-07 14:48:07.560661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.560669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.560685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.560693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.560710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.560717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.560886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.560895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.560913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.560921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.560937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.560945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.560962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.560969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.560986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:92296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.560994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.561016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.561024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.561045] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:92312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.561053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.561072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:92320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.561079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.561318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:92328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.561330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.561349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.561357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.561374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:92344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.561382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.561399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.561407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.561424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.561432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.561450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.561457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.561474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.561483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.561500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.561507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.561525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:91552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.624 [2024-10-07 14:48:07.561533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.561551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:91560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.624 [2024-10-07 14:48:07.561558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.561635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.561646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.561665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.561673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.561691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.561698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.561716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.561723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.561741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:92424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.561749] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.561767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.561774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.561792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.561800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.561818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.561826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.562016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.562026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.562045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.562053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.562085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.562093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:59.624 [2024-10-07 14:48:07.562111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.624 [2024-10-07 14:48:07.562119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.562138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.625 [2024-10-07 14:48:07.562148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.562166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.625 [2024-10-07 14:48:07.562174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.562192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.625 [2024-10-07 14:48:07.562200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.562218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.625 [2024-10-07 14:48:07.562226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.562945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.625 [2024-10-07 14:48:07.562960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.562981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.625 [2024-10-07 14:48:07.562989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.563013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.625 [2024-10-07 14:48:07.563021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.563040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.625 [2024-10-07 14:48:07.563048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.563067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.625 [2024-10-07 14:48:07.563075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.563093] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.625 [2024-10-07 14:48:07.563102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.563120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.625 [2024-10-07 14:48:07.563128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.563146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.625 [2024-10-07 14:48:07.563154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.563173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:91584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.625 [2024-10-07 14:48:07.563180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.563201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:91592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.625 [2024-10-07 14:48:07.563209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.563227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:91600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.625 [2024-10-07 14:48:07.563235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.563253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:91608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.625 [2024-10-07 14:48:07.563261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.563279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:91616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.625 [2024-10-07 14:48:07.563288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.563307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:91624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.625 [2024-10-07 14:48:07.563315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.563333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:91632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.625 [2024-10-07 14:48:07.563340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.563358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:91640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.625 [2024-10-07 14:48:07.563367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.563385] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:91648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.625 [2024-10-07 14:48:07.563392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.563452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.625 [2024-10-07 14:48:07.563462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.563484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:91664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.625 [2024-10-07 14:48:07.563492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.563511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:91672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.625 [2024-10-07 14:48:07.563519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.563538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:91680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.625 [2024-10-07 14:48:07.563546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.563568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.625 [2024-10-07 14:48:07.563576] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.563595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.625 [2024-10-07 14:48:07.563603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.563622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.625 [2024-10-07 14:48:07.563631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:59.625 [2024-10-07 14:48:07.563650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.625 [2024-10-07 14:48:07.563657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:59.626 [2024-10-07 14:48:07.563676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.626 [2024-10-07 14:48:07.563684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:37:59.626 [2024-10-07 14:48:07.563704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:91720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.626 [2024-10-07 14:48:07.563711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:59.626 [2024-10-07 14:48:07.563731] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.626 [2024-10-07 14:48:07.563739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:37:59.626 [2024-10-07 14:48:07.563758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.626 [2024-10-07 14:48:07.563765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:59.626 [2024-10-07 14:48:07.563784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:91744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.626 [2024-10-07 14:48:07.563793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:37:59.626 9145.67 IOPS, 35.73 MiB/s [2024-10-07T12:48:23.335Z] 8442.15 IOPS, 32.98 MiB/s [2024-10-07T12:48:23.335Z] 7839.14 IOPS, 30.62 MiB/s [2024-10-07T12:48:23.335Z] 7377.87 IOPS, 28.82 MiB/s [2024-10-07T12:48:23.335Z] 7645.12 IOPS, 29.86 MiB/s [2024-10-07T12:48:23.335Z] 7877.82 IOPS, 30.77 MiB/s [2024-10-07T12:48:23.335Z] 8281.39 IOPS, 32.35 MiB/s [2024-10-07T12:48:23.335Z] 8650.11 IOPS, 33.79 MiB/s [2024-10-07T12:48:23.335Z] 8895.30 IOPS, 34.75 MiB/s [2024-10-07T12:48:23.335Z] 9017.14 IOPS, 35.22 MiB/s [2024-10-07T12:48:23.335Z] 9132.00 IOPS, 35.67 MiB/s [2024-10-07T12:48:23.335Z] 9368.43 IOPS, 36.60 MiB/s [2024-10-07T12:48:23.335Z] 9613.46 IOPS, 37.55 MiB/s [2024-10-07T12:48:23.335Z] [2024-10-07 14:48:20.310651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:89568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.626 [2024-10-07 14:48:20.310701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:59.626 [2024-10-07 14:48:20.310745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:89600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.626 [2024-10-07 14:48:20.310755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:37:59.626 [2024-10-07 14:48:20.310776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:89632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.626 [2024-10-07 14:48:20.310784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:59.626 [2024-10-07 14:48:20.310961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:89672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.626 [2024-10-07 14:48:20.310975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:59.626 [2024-10-07 14:48:20.310991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:89712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.626 [2024-10-07 14:48:20.311006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:37:59.626 [2024-10-07 14:48:20.311022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.626 [2024-10-07 14:48:20.311030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:59.626 [2024-10-07 14:48:20.311044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:96 nsid:1 lba:89744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.626 [2024-10-07 14:48:20.311054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:37:59.626 [2024-10-07 14:48:20.311972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.626 [2024-10-07 14:48:20.311993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:59.626 [2024-10-07 14:48:20.312017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.626 [2024-10-07 14:48:20.312026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:59.626 9743.60 IOPS, 38.06 MiB/s [2024-10-07T12:48:23.335Z] 9698.42 IOPS, 37.88 MiB/s [2024-10-07T12:48:23.335Z] Received shutdown signal, test time was about 26.870794 seconds 00:37:59.626 00:37:59.626 Latency(us) 00:37:59.626 [2024-10-07T12:48:23.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:59.626 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:37:59.626 Verification LBA range: start 0x0 length 0x4000 00:37:59.626 Nvme0n1 : 26.87 9657.55 37.72 0.00 0.00 13234.66 320.85 3019898.88 00:37:59.626 [2024-10-07T12:48:23.335Z] =================================================================================================================== 00:37:59.626 [2024-10-07T12:48:23.335Z] Total : 9657.55 37.72 0.00 0.00 13234.66 320.85 3019898.88 00:37:59.626 [2024-10-07 14:48:22.538844] app.c:1033:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 
1 times 00:37:59.626 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:59.626 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:37:59.626 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:37:59.626 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:37:59.626 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:59.626 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:37:59.626 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:59.626 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:37:59.626 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:59.626 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:59.626 rmmod nvme_tcp 00:37:59.626 rmmod nvme_fabrics 00:37:59.626 rmmod nvme_keyring 00:37:59.626 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:59.626 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:37:59.626 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:37:59.626 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 3243809 ']' 00:37:59.626 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 3243809 00:37:59.626 14:48:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3243809 ']' 00:37:59.626 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3243809 00:37:59.626 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:37:59.627 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:59.627 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3243809 00:37:59.887 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:59.887 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:59.887 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3243809' 00:37:59.887 killing process with pid 3243809 00:37:59.887 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3243809 00:37:59.887 14:48:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3243809 00:38:00.828 14:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:00.828 14:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:00.828 14:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:00.828 14:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:38:00.828 14:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:38:00.828 14:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:00.828 14:48:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:38:00.828 14:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:00.828 14:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:00.828 14:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:00.828 14:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:00.828 14:48:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:02.737 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:02.737 00:38:02.737 real 0m42.258s 00:38:02.737 user 1m48.288s 00:38:02.737 sys 0m11.694s 00:38:02.737 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:02.737 14:48:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:38:02.737 ************************************ 00:38:02.737 END TEST nvmf_host_multipath_status 00:38:02.737 ************************************ 00:38:02.737 14:48:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:38:02.737 14:48:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:02.737 14:48:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:02.737 14:48:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:02.998 ************************************ 00:38:02.998 START TEST nvmf_discovery_remove_ifc 00:38:02.998 ************************************ 00:38:02.998 
14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:38:02.998 * Looking for test storage... 00:38:02.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:38:02.998 14:48:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:02.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:02.998 --rc genhtml_branch_coverage=1 00:38:02.998 --rc genhtml_function_coverage=1 00:38:02.998 --rc genhtml_legend=1 00:38:02.998 --rc geninfo_all_blocks=1 00:38:02.998 --rc geninfo_unexecuted_blocks=1 00:38:02.998 00:38:02.998 ' 00:38:02.998 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:02.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:02.998 --rc genhtml_branch_coverage=1 00:38:02.998 --rc genhtml_function_coverage=1 00:38:02.998 --rc genhtml_legend=1 00:38:02.998 --rc geninfo_all_blocks=1 00:38:02.998 --rc geninfo_unexecuted_blocks=1 00:38:02.998 00:38:02.998 ' 00:38:02.999 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:02.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:02.999 --rc genhtml_branch_coverage=1 00:38:02.999 --rc genhtml_function_coverage=1 00:38:02.999 --rc genhtml_legend=1 00:38:02.999 --rc geninfo_all_blocks=1 00:38:02.999 --rc geninfo_unexecuted_blocks=1 00:38:02.999 00:38:02.999 ' 00:38:02.999 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:02.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:02.999 --rc genhtml_branch_coverage=1 00:38:02.999 --rc genhtml_function_coverage=1 00:38:02.999 --rc genhtml_legend=1 00:38:02.999 --rc geninfo_all_blocks=1 00:38:02.999 --rc geninfo_unexecuted_blocks=1 00:38:02.999 00:38:02.999 ' 00:38:02.999 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:02.999 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:38:02.999 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:02.999 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:02.999 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:02.999 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:02.999 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:02.999 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:02.999 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:02.999 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:02.999 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:02.999 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:02.999 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:02.999 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:02.999 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:02.999 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:02.999 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:02.999 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:02.999 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:02.999 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:03.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:38:03.260 
14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:38:03.260 14:48:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:11.409 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:11.409 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:38:11.409 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:11.409 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:11.409 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:11.409 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:11.409 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:11.409 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:38:11.409 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:11.409 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:38:11.409 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:38:11.409 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:38:11.409 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:38:11.409 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:38:11.409 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:38:11.409 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:11.409 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:11.409 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:11.409 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:11.409 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:11.409 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:11.409 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:11.409 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:11.410 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:11.410 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:11.410 Found net devices under 0000:31:00.0: cvl_0_0 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:11.410 14:48:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:11.410 Found net devices under 0000:31:00.1: cvl_0_1 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:11.410 14:48:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:11.410 14:48:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:11.410 14:48:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:11.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:11.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.507 ms 00:38:11.410 00:38:11.410 --- 10.0.0.2 ping statistics --- 00:38:11.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:11.410 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:38:11.410 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:11.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:11.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:38:11.410 00:38:11.410 --- 10.0.0.1 ping statistics --- 00:38:11.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:11.410 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:38:11.410 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:11.410 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:38:11.410 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:11.410 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:11.410 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:11.410 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:11.410 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:11.410 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:11.410 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:11.410 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:38:11.410 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:11.410 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:11.410 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:11.410 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=3254843 00:38:11.410 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # waitforlisten 3254843 00:38:11.410 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:38:11.410 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3254843 ']' 00:38:11.410 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:11.410 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:11.410 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:11.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:11.410 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:11.411 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:11.411 [2024-10-07 14:48:34.172558] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:38:11.411 [2024-10-07 14:48:34.172678] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:11.411 [2024-10-07 14:48:34.330181] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:11.411 [2024-10-07 14:48:34.550866] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:11.411 [2024-10-07 14:48:34.550946] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:38:11.411 [2024-10-07 14:48:34.550959] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:11.411 [2024-10-07 14:48:34.550973] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:11.411 [2024-10-07 14:48:34.550984] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:11.411 [2024-10-07 14:48:34.552479] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:38:11.411 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:11.411 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:38:11.411 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:11.411 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:11.411 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:11.411 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:11.411 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:38:11.411 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:11.411 14:48:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:11.411 [2024-10-07 14:48:34.978540] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:11.411 [2024-10-07 14:48:34.986744] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:38:11.411 null0 00:38:11.411 [2024-10-07 14:48:35.018726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:38:11.411 14:48:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:11.411 14:48:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3255038 00:38:11.411 14:48:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3255038 /tmp/host.sock 00:38:11.411 14:48:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:38:11.411 14:48:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3255038 ']' 00:38:11.411 14:48:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:38:11.411 14:48:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:11.411 14:48:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:38:11.411 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:38:11.411 14:48:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:11.411 14:48:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:11.672 [2024-10-07 14:48:35.131493] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:38:11.672 [2024-10-07 14:48:35.131606] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3255038 ] 00:38:11.672 [2024-10-07 14:48:35.271989] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:11.932 [2024-10-07 14:48:35.450029] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:38:12.193 14:48:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:12.193 14:48:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:38:12.193 14:48:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:12.193 14:48:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:38:12.193 14:48:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.193 14:48:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:12.193 14:48:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.193 14:48:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:38:12.193 14:48:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.193 14:48:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:12.453 14:48:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.453 14:48:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:38:12.453 14:48:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.453 14:48:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:13.834 [2024-10-07 14:48:37.128018] bdev_nvme.c:7164:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:38:13.834 [2024-10-07 14:48:37.128052] bdev_nvme.c:7244:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:38:13.834 [2024-10-07 14:48:37.128080] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:38:13.834 [2024-10-07 14:48:37.256521] bdev_nvme.c:7093:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:38:13.834 [2024-10-07 14:48:37.485394] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:38:13.834 [2024-10-07 14:48:37.485465] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:38:13.834 [2024-10-07 14:48:37.485520] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:38:13.834 [2024-10-07 14:48:37.485543] bdev_nvme.c:6983:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:38:13.834 [2024-10-07 14:48:37.485579] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:38:13.834 14:48:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.834 14:48:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:38:13.834 14:48:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:13.834 14:48:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:13.834 14:48:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.834 14:48:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:13.834 14:48:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:13.834 14:48:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:13.834 14:48:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:13.834 [2024-10-07 14:48:37.501458] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500039ee80 was disconnected and freed. delete nvme_qpair. 
00:38:13.834 14:48:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.834 14:48:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:38:13.834 14:48:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:38:13.834 14:48:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:38:14.094 14:48:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:38:14.094 14:48:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:14.094 14:48:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:14.094 14:48:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:14.094 14:48:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.094 14:48:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:14.094 14:48:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:14.094 14:48:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:14.094 14:48:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.094 14:48:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:14.094 14:48:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:15.035 14:48:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:15.035 14:48:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:15.035 14:48:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:15.035 14:48:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.035 14:48:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:15.035 14:48:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:15.035 14:48:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:15.295 14:48:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.295 14:48:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:15.295 14:48:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:16.238 14:48:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:16.238 14:48:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:16.238 14:48:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:16.238 14:48:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:16.238 14:48:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:16.238 14:48:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:16.238 14:48:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:38:16.238 14:48:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:16.238 14:48:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:16.238 14:48:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:17.179 14:48:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:17.179 14:48:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:17.179 14:48:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:17.179 14:48:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.180 14:48:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:17.180 14:48:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:17.180 14:48:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:17.180 14:48:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.180 14:48:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:17.180 14:48:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:18.564 14:48:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:18.564 14:48:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:18.564 14:48:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:18.564 14:48:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.564 14:48:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:18.564 14:48:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:18.564 14:48:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:18.564 14:48:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.564 14:48:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:18.564 14:48:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:19.507 [2024-10-07 14:48:42.925689] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:38:19.507 [2024-10-07 14:48:42.925756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:19.507 [2024-10-07 14:48:42.925774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:19.507 [2024-10-07 14:48:42.925791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:19.507 [2024-10-07 14:48:42.925802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:19.507 [2024-10-07 14:48:42.925814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:19.507 [2024-10-07 14:48:42.925825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:19.507 [2024-10-07 14:48:42.925836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:19.507 [2024-10-07 14:48:42.925847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:19.507 [2024-10-07 14:48:42.925859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:38:19.507 [2024-10-07 14:48:42.925870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:19.507 [2024-10-07 14:48:42.925880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e700 is same with the state(6) to be set 00:38:19.507 [2024-10-07 14:48:42.935705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e700 (9): Bad file descriptor 00:38:19.507 [2024-10-07 14:48:42.945759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:38:19.507 14:48:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:19.507 14:48:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:19.507 14:48:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:19.507 14:48:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:19.507 14:48:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:19.507 14:48:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:19.507 14:48:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:20.448 [2024-10-07 14:48:43.965027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:38:20.448 [2024-10-07 14:48:43.965077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e700 with addr=10.0.0.2, port=4420 00:38:20.448 [2024-10-07 14:48:43.965095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e700 is same with the state(6) to be set 00:38:20.448 [2024-10-07 14:48:43.965127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e700 (9): Bad file descriptor 00:38:20.449 [2024-10-07 14:48:43.965637] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:38:20.449 [2024-10-07 14:48:43.965671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:38:20.449 [2024-10-07 14:48:43.965687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:38:20.449 [2024-10-07 14:48:43.965701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:38:20.449 [2024-10-07 14:48:43.965727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:20.449 [2024-10-07 14:48:43.965738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:38:20.449 14:48:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:20.449 14:48:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:38:20.449 14:48:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:21.391 [2024-10-07 14:48:44.968126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:38:21.391 [2024-10-07 14:48:44.968158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:38:21.391 [2024-10-07 14:48:44.968169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:38:21.391 [2024-10-07 14:48:44.968179] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:38:21.391 [2024-10-07 14:48:44.968199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:38:21.391 [2024-10-07 14:48:44.968229] bdev_nvme.c:6915:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:38:21.391 [2024-10-07 14:48:44.968265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:21.391 [2024-10-07 14:48:44.968281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.391 [2024-10-07 14:48:44.968302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:21.391 [2024-10-07 14:48:44.968318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.391 [2024-10-07 14:48:44.968330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:21.391 [2024-10-07 14:48:44.968341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.391 [2024-10-07 14:48:44.968353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:21.391 [2024-10-07 14:48:44.968364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.391 [2024-10-07 14:48:44.968376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:38:21.391 [2024-10-07 14:48:44.968387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.391 [2024-10-07 14:48:44.968398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:38:21.391 [2024-10-07 14:48:44.968802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039df80 (9): Bad file descriptor 00:38:21.391 [2024-10-07 14:48:44.969822] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:38:21.391 [2024-10-07 14:48:44.969845] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:38:21.391 14:48:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:21.391 14:48:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:21.391 14:48:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:21.391 14:48:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:21.391 14:48:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:21.391 14:48:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:21.391 14:48:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:21.391 14:48:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:21.391 14:48:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:38:21.391 14:48:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:21.391 14:48:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:21.652 14:48:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:38:21.652 14:48:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:21.652 14:48:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:21.652 14:48:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:21.652 14:48:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:21.652 14:48:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:21.652 14:48:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:21.652 14:48:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:21.652 14:48:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:21.652 14:48:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:38:21.652 14:48:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:22.594 14:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:22.594 14:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:22.594 14:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:22.594 14:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:22.594 14:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:22.594 14:48:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:22.594 14:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:22.594 14:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:22.594 14:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:38:22.594 14:48:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:38:23.536 [2024-10-07 14:48:47.028213] bdev_nvme.c:7164:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:38:23.536 [2024-10-07 14:48:47.028239] bdev_nvme.c:7244:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:38:23.536 [2024-10-07 14:48:47.028274] bdev_nvme.c:7127:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:38:23.536 [2024-10-07 14:48:47.156716] bdev_nvme.c:7093:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:38:23.536 [2024-10-07 14:48:47.218560] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:38:23.536 [2024-10-07 14:48:47.218621] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:38:23.536 [2024-10-07 14:48:47.218666] bdev_nvme.c:7954:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:38:23.536 [2024-10-07 14:48:47.218688] bdev_nvme.c:6983:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:38:23.536 [2024-10-07 14:48:47.218703] bdev_nvme.c:6942:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:38:23.536 14:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:38:23.536 14:48:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:38:23.536 14:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:38:23.536 14:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:23.536 14:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:38:23.536 14:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:23.536 14:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:38:23.797 14:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:23.797 [2024-10-07 14:48:47.268031] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150003a0500 was disconnected and freed. delete nvme_qpair. 00:38:23.797 14:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:38:23.797 14:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:38:23.797 14:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3255038 00:38:23.797 14:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3255038 ']' 00:38:23.797 14:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3255038 00:38:23.797 14:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:38:23.797 14:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:23.797 14:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o 
comm= 3255038 00:38:23.797 14:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:23.797 14:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:23.797 14:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3255038' 00:38:23.797 killing process with pid 3255038 00:38:23.797 14:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3255038 00:38:23.797 14:48:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3255038 00:38:24.368 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:38:24.368 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:24.368 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:38:24.629 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:24.629 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:38:24.629 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:24.629 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:24.629 rmmod nvme_tcp 00:38:24.629 rmmod nvme_fabrics 00:38:24.629 rmmod nvme_keyring 00:38:24.629 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:24.629 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:38:24.630 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:38:24.630 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 3254843 ']' 
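The `killprocess 3255038` trace above walks through autotest_common.sh's teardown helper: guard against an empty pid, probe the process with `kill -0`, inspect its name via `ps --no-headers -o comm=`, announce the kill, then signal and `wait` for it. A simplified sketch of that flow (omitting the `ps` name check and the sudo special-casing visible in the trace):

```shell
# Hedged sketch of the killprocess helper traced above. Simplified:
# the real helper also reads the process name with ps and treats
# processes running under sudo differently.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1               # '[' -z "$pid" ']' guard
    kill -0 "$pid" 2>/dev/null || return 0  # kill -0 probes without signaling
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                 # reap; only valid for our own children
    return 0
}

sleep 60 &          # toy child standing in for the target app
killprocess $!      # prints "killing process with pid <pid>"
```

Note that `wait` succeeds only for children of the current shell, which is why the helper is used from the same shell that launched the target, as in this test run.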
00:38:24.630 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 3254843 00:38:24.630 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3254843 ']' 00:38:24.630 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3254843 00:38:24.630 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:38:24.630 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:24.630 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3254843 00:38:24.630 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:24.630 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:24.630 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3254843' 00:38:24.630 killing process with pid 3254843 00:38:24.630 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3254843 00:38:24.630 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3254843 00:38:25.201 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:25.201 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:25.201 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:25.201 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:38:25.201 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:38:25.201 14:48:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:25.201 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:38:25.201 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:25.201 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:25.201 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:25.201 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:25.201 14:48:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:27.745 14:48:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:27.745 00:38:27.745 real 0m24.453s 00:38:27.745 user 0m29.042s 00:38:27.745 sys 0m7.072s 00:38:27.745 14:48:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:27.745 14:48:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:38:27.745 ************************************ 00:38:27.745 END TEST nvmf_discovery_remove_ifc 00:38:27.745 ************************************ 00:38:27.745 14:48:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:38:27.745 14:48:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:27.745 14:48:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:27.745 14:48:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:27.745 ************************************ 
00:38:27.745 START TEST nvmf_identify_kernel_target 00:38:27.745 ************************************ 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:38:27.745 * Looking for test storage... 00:38:27.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:38:27.745 14:48:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:27.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.745 --rc genhtml_branch_coverage=1 00:38:27.745 --rc genhtml_function_coverage=1 00:38:27.745 --rc genhtml_legend=1 00:38:27.745 --rc geninfo_all_blocks=1 00:38:27.745 --rc geninfo_unexecuted_blocks=1 00:38:27.745 00:38:27.745 ' 00:38:27.745 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:27.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.746 --rc genhtml_branch_coverage=1 00:38:27.746 --rc genhtml_function_coverage=1 00:38:27.746 --rc genhtml_legend=1 00:38:27.746 --rc geninfo_all_blocks=1 00:38:27.746 --rc geninfo_unexecuted_blocks=1 00:38:27.746 00:38:27.746 ' 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:27.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.746 --rc genhtml_branch_coverage=1 00:38:27.746 --rc genhtml_function_coverage=1 00:38:27.746 --rc genhtml_legend=1 00:38:27.746 --rc geninfo_all_blocks=1 00:38:27.746 --rc geninfo_unexecuted_blocks=1 00:38:27.746 00:38:27.746 ' 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:27.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.746 --rc genhtml_branch_coverage=1 00:38:27.746 --rc genhtml_function_coverage=1 00:38:27.746 --rc genhtml_legend=1 00:38:27.746 --rc geninfo_all_blocks=1 
00:38:27.746 --rc geninfo_unexecuted_blocks=1 00:38:27.746 00:38:27.746 ' 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:27.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:38:27.746 14:48:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:35.888 14:48:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:35.888 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:35.888 14:48:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:35.888 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:35.888 14:48:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:35.888 Found net devices under 0000:31:00.0: cvl_0_0 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:35.888 Found net devices under 0000:31:00.1: cvl_0_1 
00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:35.888 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:35.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:35.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.493 ms 00:38:35.889 00:38:35.889 --- 10.0.0.2 ping statistics --- 00:38:35.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:35.889 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:35.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:35.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:38:35.889 00:38:35.889 --- 10.0.0.1 ping statistics --- 00:38:35.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:35.889 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:38:35.889 
14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:35.889 14:48:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:39.187 Waiting for block devices as requested 00:38:39.187 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:39.187 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:39.187 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:39.187 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:39.187 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:39.187 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:39.187 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:39.187 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:39.187 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:39.449 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:39.449 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:39.711 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:39.711 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:39.711 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:39.711 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 
00:38:39.971 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:39.971 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:40.232 No valid GPT data, bailing 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:40.232 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:38:40.493 00:38:40.493 Discovery Log Number of Records 2, Generation counter 2 00:38:40.493 =====Discovery Log Entry 0====== 00:38:40.493 trtype: tcp 00:38:40.493 adrfam: ipv4 00:38:40.493 subtype: current discovery subsystem 
00:38:40.493 treq: not specified, sq flow control disable supported 00:38:40.493 portid: 1 00:38:40.493 trsvcid: 4420 00:38:40.493 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:40.493 traddr: 10.0.0.1 00:38:40.493 eflags: none 00:38:40.493 sectype: none 00:38:40.493 =====Discovery Log Entry 1====== 00:38:40.493 trtype: tcp 00:38:40.493 adrfam: ipv4 00:38:40.493 subtype: nvme subsystem 00:38:40.493 treq: not specified, sq flow control disable supported 00:38:40.493 portid: 1 00:38:40.493 trsvcid: 4420 00:38:40.493 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:40.493 traddr: 10.0.0.1 00:38:40.493 eflags: none 00:38:40.493 sectype: none 00:38:40.493 14:49:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:38:40.493 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:38:40.493 ===================================================== 00:38:40.493 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:38:40.493 ===================================================== 00:38:40.493 Controller Capabilities/Features 00:38:40.493 ================================ 00:38:40.493 Vendor ID: 0000 00:38:40.493 Subsystem Vendor ID: 0000 00:38:40.493 Serial Number: ab97fd91a41c5a1698ba 00:38:40.494 Model Number: Linux 00:38:40.494 Firmware Version: 6.8.9-20 00:38:40.494 Recommended Arb Burst: 0 00:38:40.494 IEEE OUI Identifier: 00 00 00 00:38:40.494 Multi-path I/O 00:38:40.494 May have multiple subsystem ports: No 00:38:40.494 May have multiple controllers: No 00:38:40.494 Associated with SR-IOV VF: No 00:38:40.494 Max Data Transfer Size: Unlimited 00:38:40.494 Max Number of Namespaces: 0 00:38:40.494 Max Number of I/O Queues: 1024 00:38:40.494 NVMe Specification Version (VS): 1.3 00:38:40.494 NVMe Specification Version (Identify): 1.3 00:38:40.494 Maximum Queue Entries: 1024 
00:38:40.494 Contiguous Queues Required: No 00:38:40.494 Arbitration Mechanisms Supported 00:38:40.494 Weighted Round Robin: Not Supported 00:38:40.494 Vendor Specific: Not Supported 00:38:40.494 Reset Timeout: 7500 ms 00:38:40.494 Doorbell Stride: 4 bytes 00:38:40.494 NVM Subsystem Reset: Not Supported 00:38:40.494 Command Sets Supported 00:38:40.494 NVM Command Set: Supported 00:38:40.494 Boot Partition: Not Supported 00:38:40.494 Memory Page Size Minimum: 4096 bytes 00:38:40.494 Memory Page Size Maximum: 4096 bytes 00:38:40.494 Persistent Memory Region: Not Supported 00:38:40.494 Optional Asynchronous Events Supported 00:38:40.494 Namespace Attribute Notices: Not Supported 00:38:40.494 Firmware Activation Notices: Not Supported 00:38:40.494 ANA Change Notices: Not Supported 00:38:40.494 PLE Aggregate Log Change Notices: Not Supported 00:38:40.494 LBA Status Info Alert Notices: Not Supported 00:38:40.494 EGE Aggregate Log Change Notices: Not Supported 00:38:40.494 Normal NVM Subsystem Shutdown event: Not Supported 00:38:40.494 Zone Descriptor Change Notices: Not Supported 00:38:40.494 Discovery Log Change Notices: Supported 00:38:40.494 Controller Attributes 00:38:40.494 128-bit Host Identifier: Not Supported 00:38:40.494 Non-Operational Permissive Mode: Not Supported 00:38:40.494 NVM Sets: Not Supported 00:38:40.494 Read Recovery Levels: Not Supported 00:38:40.494 Endurance Groups: Not Supported 00:38:40.494 Predictable Latency Mode: Not Supported 00:38:40.494 Traffic Based Keep ALive: Not Supported 00:38:40.494 Namespace Granularity: Not Supported 00:38:40.494 SQ Associations: Not Supported 00:38:40.494 UUID List: Not Supported 00:38:40.494 Multi-Domain Subsystem: Not Supported 00:38:40.494 Fixed Capacity Management: Not Supported 00:38:40.494 Variable Capacity Management: Not Supported 00:38:40.494 Delete Endurance Group: Not Supported 00:38:40.494 Delete NVM Set: Not Supported 00:38:40.494 Extended LBA Formats Supported: Not Supported 00:38:40.494 Flexible 
Data Placement Supported: Not Supported 00:38:40.494 00:38:40.494 Controller Memory Buffer Support 00:38:40.494 ================================ 00:38:40.494 Supported: No 00:38:40.494 00:38:40.494 Persistent Memory Region Support 00:38:40.494 ================================ 00:38:40.494 Supported: No 00:38:40.494 00:38:40.494 Admin Command Set Attributes 00:38:40.494 ============================ 00:38:40.494 Security Send/Receive: Not Supported 00:38:40.494 Format NVM: Not Supported 00:38:40.494 Firmware Activate/Download: Not Supported 00:38:40.494 Namespace Management: Not Supported 00:38:40.494 Device Self-Test: Not Supported 00:38:40.494 Directives: Not Supported 00:38:40.494 NVMe-MI: Not Supported 00:38:40.494 Virtualization Management: Not Supported 00:38:40.494 Doorbell Buffer Config: Not Supported 00:38:40.494 Get LBA Status Capability: Not Supported 00:38:40.494 Command & Feature Lockdown Capability: Not Supported 00:38:40.494 Abort Command Limit: 1 00:38:40.494 Async Event Request Limit: 1 00:38:40.494 Number of Firmware Slots: N/A 00:38:40.494 Firmware Slot 1 Read-Only: N/A 00:38:40.494 Firmware Activation Without Reset: N/A 00:38:40.494 Multiple Update Detection Support: N/A 00:38:40.494 Firmware Update Granularity: No Information Provided 00:38:40.494 Per-Namespace SMART Log: No 00:38:40.494 Asymmetric Namespace Access Log Page: Not Supported 00:38:40.494 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:38:40.494 Command Effects Log Page: Not Supported 00:38:40.494 Get Log Page Extended Data: Supported 00:38:40.494 Telemetry Log Pages: Not Supported 00:38:40.494 Persistent Event Log Pages: Not Supported 00:38:40.494 Supported Log Pages Log Page: May Support 00:38:40.494 Commands Supported & Effects Log Page: Not Supported 00:38:40.494 Feature Identifiers & Effects Log Page:May Support 00:38:40.494 NVMe-MI Commands & Effects Log Page: May Support 00:38:40.494 Data Area 4 for Telemetry Log: Not Supported 00:38:40.494 Error Log Page Entries 
Supported: 1 00:38:40.494 Keep Alive: Not Supported 00:38:40.494 00:38:40.494 NVM Command Set Attributes 00:38:40.494 ========================== 00:38:40.494 Submission Queue Entry Size 00:38:40.494 Max: 1 00:38:40.494 Min: 1 00:38:40.494 Completion Queue Entry Size 00:38:40.494 Max: 1 00:38:40.494 Min: 1 00:38:40.495 Number of Namespaces: 0 00:38:40.495 Compare Command: Not Supported 00:38:40.495 Write Uncorrectable Command: Not Supported 00:38:40.495 Dataset Management Command: Not Supported 00:38:40.495 Write Zeroes Command: Not Supported 00:38:40.495 Set Features Save Field: Not Supported 00:38:40.495 Reservations: Not Supported 00:38:40.495 Timestamp: Not Supported 00:38:40.495 Copy: Not Supported 00:38:40.495 Volatile Write Cache: Not Present 00:38:40.495 Atomic Write Unit (Normal): 1 00:38:40.495 Atomic Write Unit (PFail): 1 00:38:40.495 Atomic Compare & Write Unit: 1 00:38:40.495 Fused Compare & Write: Not Supported 00:38:40.495 Scatter-Gather List 00:38:40.495 SGL Command Set: Supported 00:38:40.495 SGL Keyed: Not Supported 00:38:40.495 SGL Bit Bucket Descriptor: Not Supported 00:38:40.495 SGL Metadata Pointer: Not Supported 00:38:40.495 Oversized SGL: Not Supported 00:38:40.495 SGL Metadata Address: Not Supported 00:38:40.495 SGL Offset: Supported 00:38:40.495 Transport SGL Data Block: Not Supported 00:38:40.495 Replay Protected Memory Block: Not Supported 00:38:40.495 00:38:40.495 Firmware Slot Information 00:38:40.495 ========================= 00:38:40.495 Active slot: 0 00:38:40.495 00:38:40.495 00:38:40.495 Error Log 00:38:40.495 ========= 00:38:40.495 00:38:40.495 Active Namespaces 00:38:40.495 ================= 00:38:40.495 Discovery Log Page 00:38:40.495 ================== 00:38:40.495 Generation Counter: 2 00:38:40.495 Number of Records: 2 00:38:40.495 Record Format: 0 00:38:40.495 00:38:40.495 Discovery Log Entry 0 00:38:40.495 ---------------------- 00:38:40.495 Transport Type: 3 (TCP) 00:38:40.495 Address Family: 1 (IPv4) 00:38:40.495 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:38:40.495 Entry Flags: 00:38:40.495 Duplicate Returned Information: 0 00:38:40.495 Explicit Persistent Connection Support for Discovery: 0 00:38:40.495 Transport Requirements: 00:38:40.495 Secure Channel: Not Specified 00:38:40.495 Port ID: 1 (0x0001) 00:38:40.495 Controller ID: 65535 (0xffff) 00:38:40.495 Admin Max SQ Size: 32 00:38:40.495 Transport Service Identifier: 4420 00:38:40.495 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:38:40.495 Transport Address: 10.0.0.1 00:38:40.495 Discovery Log Entry 1 00:38:40.495 ---------------------- 00:38:40.495 Transport Type: 3 (TCP) 00:38:40.495 Address Family: 1 (IPv4) 00:38:40.495 Subsystem Type: 2 (NVM Subsystem) 00:38:40.495 Entry Flags: 00:38:40.495 Duplicate Returned Information: 0 00:38:40.495 Explicit Persistent Connection Support for Discovery: 0 00:38:40.495 Transport Requirements: 00:38:40.495 Secure Channel: Not Specified 00:38:40.495 Port ID: 1 (0x0001) 00:38:40.495 Controller ID: 65535 (0xffff) 00:38:40.495 Admin Max SQ Size: 32 00:38:40.495 Transport Service Identifier: 4420 00:38:40.495 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:38:40.495 Transport Address: 10.0.0.1 00:38:40.495 14:49:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:40.757 get_feature(0x01) failed 00:38:40.757 get_feature(0x02) failed 00:38:40.757 get_feature(0x04) failed 00:38:40.757 ===================================================== 00:38:40.757 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:40.757 ===================================================== 00:38:40.757 Controller Capabilities/Features 00:38:40.757 ================================ 00:38:40.757 Vendor ID: 0000 00:38:40.757 Subsystem Vendor ID: 
0000 00:38:40.757 Serial Number: b01f33c01d66f25c95f0 00:38:40.757 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:38:40.757 Firmware Version: 6.8.9-20 00:38:40.757 Recommended Arb Burst: 6 00:38:40.757 IEEE OUI Identifier: 00 00 00 00:38:40.757 Multi-path I/O 00:38:40.757 May have multiple subsystem ports: Yes 00:38:40.757 May have multiple controllers: Yes 00:38:40.757 Associated with SR-IOV VF: No 00:38:40.757 Max Data Transfer Size: Unlimited 00:38:40.757 Max Number of Namespaces: 1024 00:38:40.757 Max Number of I/O Queues: 128 00:38:40.757 NVMe Specification Version (VS): 1.3 00:38:40.757 NVMe Specification Version (Identify): 1.3 00:38:40.757 Maximum Queue Entries: 1024 00:38:40.757 Contiguous Queues Required: No 00:38:40.757 Arbitration Mechanisms Supported 00:38:40.757 Weighted Round Robin: Not Supported 00:38:40.757 Vendor Specific: Not Supported 00:38:40.757 Reset Timeout: 7500 ms 00:38:40.757 Doorbell Stride: 4 bytes 00:38:40.757 NVM Subsystem Reset: Not Supported 00:38:40.757 Command Sets Supported 00:38:40.757 NVM Command Set: Supported 00:38:40.757 Boot Partition: Not Supported 00:38:40.757 Memory Page Size Minimum: 4096 bytes 00:38:40.757 Memory Page Size Maximum: 4096 bytes 00:38:40.757 Persistent Memory Region: Not Supported 00:38:40.757 Optional Asynchronous Events Supported 00:38:40.757 Namespace Attribute Notices: Supported 00:38:40.757 Firmware Activation Notices: Not Supported 00:38:40.757 ANA Change Notices: Supported 00:38:40.757 PLE Aggregate Log Change Notices: Not Supported 00:38:40.757 LBA Status Info Alert Notices: Not Supported 00:38:40.757 EGE Aggregate Log Change Notices: Not Supported 00:38:40.757 Normal NVM Subsystem Shutdown event: Not Supported 00:38:40.757 Zone Descriptor Change Notices: Not Supported 00:38:40.757 Discovery Log Change Notices: Not Supported 00:38:40.757 Controller Attributes 00:38:40.757 128-bit Host Identifier: Supported 00:38:40.757 Non-Operational Permissive Mode: Not Supported 00:38:40.757 NVM Sets: Not 
Supported 00:38:40.757 Read Recovery Levels: Not Supported 00:38:40.757 Endurance Groups: Not Supported 00:38:40.757 Predictable Latency Mode: Not Supported 00:38:40.757 Traffic Based Keep ALive: Supported 00:38:40.757 Namespace Granularity: Not Supported 00:38:40.757 SQ Associations: Not Supported 00:38:40.757 UUID List: Not Supported 00:38:40.757 Multi-Domain Subsystem: Not Supported 00:38:40.757 Fixed Capacity Management: Not Supported 00:38:40.757 Variable Capacity Management: Not Supported 00:38:40.757 Delete Endurance Group: Not Supported 00:38:40.757 Delete NVM Set: Not Supported 00:38:40.757 Extended LBA Formats Supported: Not Supported 00:38:40.757 Flexible Data Placement Supported: Not Supported 00:38:40.757 00:38:40.757 Controller Memory Buffer Support 00:38:40.757 ================================ 00:38:40.757 Supported: No 00:38:40.757 00:38:40.757 Persistent Memory Region Support 00:38:40.757 ================================ 00:38:40.757 Supported: No 00:38:40.757 00:38:40.757 Admin Command Set Attributes 00:38:40.757 ============================ 00:38:40.757 Security Send/Receive: Not Supported 00:38:40.757 Format NVM: Not Supported 00:38:40.757 Firmware Activate/Download: Not Supported 00:38:40.757 Namespace Management: Not Supported 00:38:40.757 Device Self-Test: Not Supported 00:38:40.757 Directives: Not Supported 00:38:40.757 NVMe-MI: Not Supported 00:38:40.757 Virtualization Management: Not Supported 00:38:40.757 Doorbell Buffer Config: Not Supported 00:38:40.757 Get LBA Status Capability: Not Supported 00:38:40.757 Command & Feature Lockdown Capability: Not Supported 00:38:40.757 Abort Command Limit: 4 00:38:40.757 Async Event Request Limit: 4 00:38:40.757 Number of Firmware Slots: N/A 00:38:40.757 Firmware Slot 1 Read-Only: N/A 00:38:40.757 Firmware Activation Without Reset: N/A 00:38:40.757 Multiple Update Detection Support: N/A 00:38:40.757 Firmware Update Granularity: No Information Provided 00:38:40.757 Per-Namespace SMART Log: Yes 
00:38:40.757 Asymmetric Namespace Access Log Page: Supported 00:38:40.757 ANA Transition Time : 10 sec 00:38:40.757 00:38:40.757 Asymmetric Namespace Access Capabilities 00:38:40.757 ANA Optimized State : Supported 00:38:40.757 ANA Non-Optimized State : Supported 00:38:40.757 ANA Inaccessible State : Supported 00:38:40.757 ANA Persistent Loss State : Supported 00:38:40.757 ANA Change State : Supported 00:38:40.757 ANAGRPID is not changed : No 00:38:40.757 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:38:40.757 00:38:40.757 ANA Group Identifier Maximum : 128 00:38:40.757 Number of ANA Group Identifiers : 128 00:38:40.757 Max Number of Allowed Namespaces : 1024 00:38:40.757 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:38:40.757 Command Effects Log Page: Supported 00:38:40.757 Get Log Page Extended Data: Supported 00:38:40.757 Telemetry Log Pages: Not Supported 00:38:40.757 Persistent Event Log Pages: Not Supported 00:38:40.757 Supported Log Pages Log Page: May Support 00:38:40.757 Commands Supported & Effects Log Page: Not Supported 00:38:40.757 Feature Identifiers & Effects Log Page:May Support 00:38:40.757 NVMe-MI Commands & Effects Log Page: May Support 00:38:40.757 Data Area 4 for Telemetry Log: Not Supported 00:38:40.757 Error Log Page Entries Supported: 128 00:38:40.757 Keep Alive: Supported 00:38:40.757 Keep Alive Granularity: 1000 ms 00:38:40.757 00:38:40.757 NVM Command Set Attributes 00:38:40.757 ========================== 00:38:40.757 Submission Queue Entry Size 00:38:40.757 Max: 64 00:38:40.757 Min: 64 00:38:40.757 Completion Queue Entry Size 00:38:40.758 Max: 16 00:38:40.758 Min: 16 00:38:40.758 Number of Namespaces: 1024 00:38:40.758 Compare Command: Not Supported 00:38:40.758 Write Uncorrectable Command: Not Supported 00:38:40.758 Dataset Management Command: Supported 00:38:40.758 Write Zeroes Command: Supported 00:38:40.758 Set Features Save Field: Not Supported 00:38:40.758 Reservations: Not Supported 00:38:40.758 Timestamp: Not Supported 
00:38:40.758 Copy: Not Supported 00:38:40.758 Volatile Write Cache: Present 00:38:40.758 Atomic Write Unit (Normal): 1 00:38:40.758 Atomic Write Unit (PFail): 1 00:38:40.758 Atomic Compare & Write Unit: 1 00:38:40.758 Fused Compare & Write: Not Supported 00:38:40.758 Scatter-Gather List 00:38:40.758 SGL Command Set: Supported 00:38:40.758 SGL Keyed: Not Supported 00:38:40.758 SGL Bit Bucket Descriptor: Not Supported 00:38:40.758 SGL Metadata Pointer: Not Supported 00:38:40.758 Oversized SGL: Not Supported 00:38:40.758 SGL Metadata Address: Not Supported 00:38:40.758 SGL Offset: Supported 00:38:40.758 Transport SGL Data Block: Not Supported 00:38:40.758 Replay Protected Memory Block: Not Supported 00:38:40.758 00:38:40.758 Firmware Slot Information 00:38:40.758 ========================= 00:38:40.758 Active slot: 0 00:38:40.758 00:38:40.758 Asymmetric Namespace Access 00:38:40.758 =========================== 00:38:40.758 Change Count : 0 00:38:40.758 Number of ANA Group Descriptors : 1 00:38:40.758 ANA Group Descriptor : 0 00:38:40.758 ANA Group ID : 1 00:38:40.758 Number of NSID Values : 1 00:38:40.758 Change Count : 0 00:38:40.758 ANA State : 1 00:38:40.758 Namespace Identifier : 1 00:38:40.758 00:38:40.758 Commands Supported and Effects 00:38:40.758 ============================== 00:38:40.758 Admin Commands 00:38:40.758 -------------- 00:38:40.758 Get Log Page (02h): Supported 00:38:40.758 Identify (06h): Supported 00:38:40.758 Abort (08h): Supported 00:38:40.758 Set Features (09h): Supported 00:38:40.758 Get Features (0Ah): Supported 00:38:40.758 Asynchronous Event Request (0Ch): Supported 00:38:40.758 Keep Alive (18h): Supported 00:38:40.758 I/O Commands 00:38:40.758 ------------ 00:38:40.758 Flush (00h): Supported 00:38:40.758 Write (01h): Supported LBA-Change 00:38:40.758 Read (02h): Supported 00:38:40.758 Write Zeroes (08h): Supported LBA-Change 00:38:40.758 Dataset Management (09h): Supported 00:38:40.758 00:38:40.758 Error Log 00:38:40.758 ========= 
00:38:40.758 Entry: 0 00:38:40.758 Error Count: 0x3 00:38:40.758 Submission Queue Id: 0x0 00:38:40.758 Command Id: 0x5 00:38:40.758 Phase Bit: 0 00:38:40.758 Status Code: 0x2 00:38:40.758 Status Code Type: 0x0 00:38:40.758 Do Not Retry: 1 00:38:40.758 Error Location: 0x28 00:38:40.758 LBA: 0x0 00:38:40.758 Namespace: 0x0 00:38:40.758 Vendor Log Page: 0x0 00:38:40.758 ----------- 00:38:40.758 Entry: 1 00:38:40.758 Error Count: 0x2 00:38:40.758 Submission Queue Id: 0x0 00:38:40.758 Command Id: 0x5 00:38:40.758 Phase Bit: 0 00:38:40.758 Status Code: 0x2 00:38:40.758 Status Code Type: 0x0 00:38:40.758 Do Not Retry: 1 00:38:40.758 Error Location: 0x28 00:38:40.758 LBA: 0x0 00:38:40.758 Namespace: 0x0 00:38:40.758 Vendor Log Page: 0x0 00:38:40.758 ----------- 00:38:40.758 Entry: 2 00:38:40.758 Error Count: 0x1 00:38:40.758 Submission Queue Id: 0x0 00:38:40.758 Command Id: 0x4 00:38:40.758 Phase Bit: 0 00:38:40.758 Status Code: 0x2 00:38:40.758 Status Code Type: 0x0 00:38:40.758 Do Not Retry: 1 00:38:40.758 Error Location: 0x28 00:38:40.758 LBA: 0x0 00:38:40.758 Namespace: 0x0 00:38:40.758 Vendor Log Page: 0x0 00:38:40.758 00:38:40.758 Number of Queues 00:38:40.758 ================ 00:38:40.758 Number of I/O Submission Queues: 128 00:38:40.758 Number of I/O Completion Queues: 128 00:38:40.758 00:38:40.758 ZNS Specific Controller Data 00:38:40.758 ============================ 00:38:40.758 Zone Append Size Limit: 0 00:38:40.758 00:38:40.758 00:38:40.758 Active Namespaces 00:38:40.758 ================= 00:38:40.758 get_feature(0x05) failed 00:38:40.758 Namespace ID:1 00:38:40.758 Command Set Identifier: NVM (00h) 00:38:40.758 Deallocate: Supported 00:38:40.758 Deallocated/Unwritten Error: Not Supported 00:38:40.758 Deallocated Read Value: Unknown 00:38:40.758 Deallocate in Write Zeroes: Not Supported 00:38:40.758 Deallocated Guard Field: 0xFFFF 00:38:40.758 Flush: Supported 00:38:40.758 Reservation: Not Supported 00:38:40.758 Namespace Sharing Capabilities: Multiple 
Controllers 00:38:40.758 Size (in LBAs): 3750748848 (1788GiB) 00:38:40.758 Capacity (in LBAs): 3750748848 (1788GiB) 00:38:40.758 Utilization (in LBAs): 3750748848 (1788GiB) 00:38:40.758 UUID: d86575c4-79e1-4117-95a7-63f53f99d040 00:38:40.758 Thin Provisioning: Not Supported 00:38:40.758 Per-NS Atomic Units: Yes 00:38:40.758 Atomic Write Unit (Normal): 8 00:38:40.758 Atomic Write Unit (PFail): 8 00:38:40.758 Preferred Write Granularity: 8 00:38:40.758 Atomic Compare & Write Unit: 8 00:38:40.758 Atomic Boundary Size (Normal): 0 00:38:40.758 Atomic Boundary Size (PFail): 0 00:38:40.758 Atomic Boundary Offset: 0 00:38:40.758 NGUID/EUI64 Never Reused: No 00:38:40.758 ANA group ID: 1 00:38:40.758 Namespace Write Protected: No 00:38:40.758 Number of LBA Formats: 1 00:38:40.758 Current LBA Format: LBA Format #00 00:38:40.758 LBA Format #00: Data Size: 512 Metadata Size: 0 00:38:40.758 00:38:40.758 14:49:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:38:40.758 14:49:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:40.758 14:49:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:38:40.758 14:49:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:40.758 14:49:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:38:40.758 14:49:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:40.758 14:49:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:40.758 rmmod nvme_tcp 00:38:40.758 rmmod nvme_fabrics 00:38:40.758 14:49:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:40.758 14:49:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:38:40.758 14:49:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:38:40.758 14:49:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:38:40.758 14:49:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:40.758 14:49:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:40.758 14:49:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:40.758 14:49:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:38:40.758 14:49:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:38:40.758 14:49:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:40.758 14:49:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:38:40.758 14:49:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:40.758 14:49:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:40.758 14:49:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:40.758 14:49:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:40.759 14:49:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:43.301 14:49:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:43.301 14:49:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:38:43.301 14:49:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:43.301 14:49:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:38:43.301 14:49:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:43.301 14:49:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:43.301 14:49:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:43.301 14:49:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:43.301 14:49:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:38:43.301 14:49:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:38:43.301 14:49:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:46.601 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:46.601 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:46.601 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:46.601 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:46.601 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:46.601 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:46.601 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:46.601 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:46.601 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:46.601 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:46.601 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:46.601 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:46.601 0000:00:01.2 (8086 0b00): ioatdma 
-> vfio-pci 00:38:46.601 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:46.601 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:46.601 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:46.601 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:46.861 00:38:46.861 real 0m19.559s 00:38:46.861 user 0m5.273s 00:38:46.861 sys 0m11.339s 00:38:46.861 14:49:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:46.861 14:49:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:38:46.861 ************************************ 00:38:46.861 END TEST nvmf_identify_kernel_target 00:38:46.861 ************************************ 00:38:47.121 14:49:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:38:47.121 14:49:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:38:47.121 14:49:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:47.121 14:49:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:47.121 ************************************ 00:38:47.121 START TEST nvmf_auth_host 00:38:47.121 ************************************ 00:38:47.121 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:38:47.121 * Looking for test storage... 
00:38:47.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:47.122 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:38:47.122 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:38:47.122 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:38:47.122 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:38:47.122 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:47.122 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:47.122 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:47.122 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:38:47.122 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:38:47.122 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:38:47.122 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:38:47.122 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:38:47.122 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:38:47.122 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:38:47.122 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:47.122 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:38:47.122 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:38:47.122 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:47.122 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:38:47.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:47.383 --rc genhtml_branch_coverage=1 00:38:47.383 --rc genhtml_function_coverage=1 00:38:47.383 --rc genhtml_legend=1 00:38:47.383 --rc geninfo_all_blocks=1 00:38:47.383 --rc geninfo_unexecuted_blocks=1 00:38:47.383 00:38:47.383 ' 00:38:47.383 14:49:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:38:47.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:47.383 --rc genhtml_branch_coverage=1 00:38:47.383 --rc genhtml_function_coverage=1 00:38:47.383 --rc genhtml_legend=1 00:38:47.383 --rc geninfo_all_blocks=1 00:38:47.383 --rc geninfo_unexecuted_blocks=1 00:38:47.383 00:38:47.383 ' 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:38:47.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:47.383 --rc genhtml_branch_coverage=1 00:38:47.383 --rc genhtml_function_coverage=1 00:38:47.383 --rc genhtml_legend=1 00:38:47.383 --rc geninfo_all_blocks=1 00:38:47.383 --rc geninfo_unexecuted_blocks=1 00:38:47.383 00:38:47.383 ' 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:38:47.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:47.383 --rc genhtml_branch_coverage=1 00:38:47.383 --rc genhtml_function_coverage=1 00:38:47.383 --rc genhtml_legend=1 00:38:47.383 --rc geninfo_all_blocks=1 00:38:47.383 --rc geninfo_unexecuted_blocks=1 00:38:47.383 00:38:47.383 ' 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:47.383 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:47.384 14:49:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:47.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:47.384 14:49:10 
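Editor's note: the trace above records a non-fatal script error, `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected`, raised by `'[' '' -eq 1 ']'`. The `-eq` operator requires integer operands, and an unset variable expands to the empty string. A minimal reproduction of the pitfall and the usual `${var:-0}` guard (the variable name below is illustrative, not taken from SPDK):

```shell
#!/bin/sh
# Reproduce "[: : integer expression expected": an empty string is not a
# valid integer operand for -eq, so the test errors (status 2, treated as
# false inside `if`) instead of comparing.
FLAG=""   # illustrative stand-in for whatever variable was unset in the log

if [ "$FLAG" -eq 1 ] 2>/dev/null; then
    echo "flag set"
else
    echo "test errored or false"
fi

# Common guard: default the expansion to 0 so the comparison is well-formed
# even when the variable is unset or empty.
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag clear"
fi
```

With the guard in place the script keeps running cleanly rather than emitting the stderr noise seen in the log.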
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:38:47.384 14:49:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.524 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:55.524 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:38:55.524 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:55.524 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:55.524 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:55.524 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:55.524 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:55.524 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:38:55.524 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:55.524 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:38:55.524 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:38:55.524 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:38:55.524 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:38:55.524 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:38:55.524 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:38:55.524 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:55.524 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:55.524 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:55.525 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:55.525 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 
00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:55.525 Found net devices under 0000:31:00.0: cvl_0_0 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:55.525 Found net devices under 0000:31:00.1: cvl_0_1 00:38:55.525 14:49:17 
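Editor's note: the "Found net devices under 0000:31:00.x" lines above come from a sysfs glob pattern: each PCI function's network interfaces appear as subdirectories of `/sys/bus/pci/devices/<bdf>/net/`, and the `${arr[@]##*/}` expansion strips the paths down to interface names. A self-contained sketch of that pattern, using a throwaway directory in place of real sysfs (the layout assumed here matches the kernel's `devices/<bdf>/net/<ifname>` convention):

```shell
#!/bin/bash
# Sketch of the PCI-to-netdev mapping seen in nvmf/common.sh@409/@425:
# glob the device's net/ subdirectory, then keep only the basenames.
# A temp directory stands in for /sys/bus/pci/devices so this runs anywhere.
sysfs=$(mktemp -d)
pci="0000:31:00.0"
mkdir -p "$sysfs/$pci/net/cvl_0_0"

pci_net_devs=("$sysfs/$pci/net/"*)        # full paths under net/
pci_net_devs=("${pci_net_devs[@]##*/}")   # strip dirname: interface names only
echo "Found net devices under $pci: ${pci_net_devs[*]}"

rm -rf "$sysfs"
```

On real hardware the same two lines, pointed at `/sys/bus/pci/devices`, yield names like `cvl_0_0` for each matched NIC port.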
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:55.525 14:49:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:55.525 14:49:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:55.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:55.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:38:55.525 00:38:55.525 --- 10.0.0.2 ping statistics --- 00:38:55.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:55.525 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:55.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:55.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:38:55.525 00:38:55.525 --- 10.0.0.1 ping statistics --- 00:38:55.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:55.525 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=3269723 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 3269723 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
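Editor's note: the `nvmf_tcp_init` sequence above builds a two-port topology: the target NIC (`cvl_0_0`, 10.0.0.2) is moved into the `cvl_0_0_ns_spdk` namespace while the initiator NIC (`cvl_0_1`, 10.0.0.1) stays in the root namespace, so the subsequent pings and the NVMe/TCP traffic actually traverse the physical link between the two ports. A dry-run sketch of that command sequence (the `run` wrapper only prints; drop the `echo` and run as root to apply it for real):

```shell
#!/bin/sh
# Dry-run sketch of the namespace topology from nvmf/common.sh@271-284.
# run() prints each command instead of executing it, since the real
# commands need root and the named interfaces.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                     # target port into ns
run ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
```

Once both sides are up, a `ping -c 1 10.0.0.2` from the root namespace (as in the log) confirms the cross-namespace path before the target is started with `ip netns exec cvl_0_0_ns_spdk ...`.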
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3269723 ']' 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:55.525 14:49:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.525 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:55.525 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:38:55.525 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:55.525 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:55.525 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:55.526 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:55.526 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:38:55.526 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:38:55.526 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:38:55.526 14:49:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:55.526 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:38:55.526 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:38:55.526 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:38:55.526 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:38:55.526 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=0d473c006fb5dda6a5c065029081e179 00:38:55.526 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:38:55.526 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.GtY 00:38:55.526 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 0d473c006fb5dda6a5c065029081e179 0 00:38:55.526 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 0d473c006fb5dda6a5c065029081e179 0 00:38:55.526 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:38:55.526 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:38:55.526 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=0d473c006fb5dda6a5c065029081e179 00:38:55.526 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:38:55.526 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.GtY 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.GtY 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.GtY 
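Editor's note: the `gen_dhchap_key` traces above follow one pattern per key: draw `len/2` random bytes as a hex string via `xxd -p -c0 /dev/urandom`, then hand the string to an inline `python -` (the `format_key DHHC-1 <key> <digest>` calls) to produce a DH-HMAC-CHAP secret, which is written to a `mktemp` file and `chmod 0600`ed. A hedged, self-contained sketch of that flow; the wire format assumed below ("DHHC-1:" + two-hex-digit hash id + base64 of the secret's ASCII bytes followed by a little-endian CRC-32 + ":") matches nvme-cli's `gen-dhchap-key` output, but is this editor's reconstruction, not copied from the script:

```shell
#!/bin/sh
# Sketch of gen_dhchap_key/format_key as traced in nvmf/common.sh@749-758.
# Assumption: DHHC-1 format is DHHC-1:<hash id, 2 hex digits>:<base64>:,
# where base64 covers the hex string's ASCII bytes plus CRC-32 (LE).
len=32                              # "null 32" => 32 hex characters
digest=0                            # 0=null, 1=sha256, 2=sha384, 3=sha512

# 2 hex chars per random byte; od fallback in case xxd is unavailable.
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom 2>/dev/null \
      || od -An -vtx1 -N$((len / 2)) /dev/urandom | tr -d ' \n')

formatted=$(python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF
)

file=$(mktemp -t spdk.key-null.XXX)  # same template style as the trace
printf '%s\n' "$formatted" > "$file"
chmod 0600 "$file"                   # secrets must not be world-readable
echo "$file"
```

The trailing CRC lets the receiving side detect a corrupted or truncated secret before attempting authentication.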
00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=a99088b7e20fdd60b60bd5a9f93b6b5c3b653caf9b51f58d5a5eb04a17f92bfd 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.fR1 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key a99088b7e20fdd60b60bd5a9f93b6b5c3b653caf9b51f58d5a5eb04a17f92bfd 3 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 a99088b7e20fdd60b60bd5a9f93b6b5c3b653caf9b51f58d5a5eb04a17f92bfd 3 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=a99088b7e20fdd60b60bd5a9f93b6b5c3b653caf9b51f58d5a5eb04a17f92bfd 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.fR1 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.fR1 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.fR1 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=2376a6db738f2229be81c92038ad37022e988c99f6ada904 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.X7Y 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 2376a6db738f2229be81c92038ad37022e988c99f6ada904 0 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 2376a6db738f2229be81c92038ad37022e988c99f6ada904 0 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
prefix=DHHC-1 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=2376a6db738f2229be81c92038ad37022e988c99f6ada904 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.X7Y 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.X7Y 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.X7Y 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=a943818cea5294de7ecd37119987a30558b0ee0a5a1566c0 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.nwp 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key a943818cea5294de7ecd37119987a30558b0ee0a5a1566c0 2 00:38:55.787 14:49:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 a943818cea5294de7ecd37119987a30558b0ee0a5a1566c0 2 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=a943818cea5294de7ecd37119987a30558b0ee0a5a1566c0 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.nwp 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.nwp 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.nwp 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=62feb63549c6e93b37f6c31141290399 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 
00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.ylX 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 62feb63549c6e93b37f6c31141290399 1 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 62feb63549c6e93b37f6c31141290399 1 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=62feb63549c6e93b37f6c31141290399 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:38:55.787 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:38:56.049 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.ylX 00:38:56.049 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.ylX 00:38:56.049 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ylX 00:38:56.049 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:38:56.049 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:38:56.049 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:56.049 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:38:56.049 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:38:56.049 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:38:56.049 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 
/dev/urandom 00:38:56.049 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=b201221a8206f5c6a6a0f853b7892c5e 00:38:56.049 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:38:56.049 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.hhp 00:38:56.049 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key b201221a8206f5c6a6a0f853b7892c5e 1 00:38:56.049 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 b201221a8206f5c6a6a0f853b7892c5e 1 00:38:56.049 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:38:56.049 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:38:56.049 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=b201221a8206f5c6a6a0f853b7892c5e 00:38:56.049 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.hhp 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.hhp 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.hhp 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:38:56.050 14:49:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=2736913caae82f1153b0a89e64a28c4f09ddb7264b098da9 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.MP7 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 2736913caae82f1153b0a89e64a28c4f09ddb7264b098da9 2 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 2736913caae82f1153b0a89e64a28c4f09ddb7264b098da9 2 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=2736913caae82f1153b0a89e64a28c4f09ddb7264b098da9 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.MP7 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.MP7 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.MP7 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # 
local digest len file key 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=7ca588cec5ecf1168c48c280d5ca818e 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.FGT 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 7ca588cec5ecf1168c48c280d5ca818e 0 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 7ca588cec5ecf1168c48c280d5ca818e 0 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=7ca588cec5ecf1168c48c280d5ca818e 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.FGT 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.FGT 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.FGT 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=cc353fb28242d46f8b3f4ce50d0b5bf29cc9f535e673ae43a7698f90b64cd35d 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.C9y 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key cc353fb28242d46f8b3f4ce50d0b5bf29cc9f535e673ae43a7698f90b64cd35d 3 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 cc353fb28242d46f8b3f4ce50d0b5bf29cc9f535e673ae43a7698f90b64cd35d 3 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=cc353fb28242d46f8b3f4ce50d0b5bf29cc9f535e673ae43a7698f90b64cd35d 00:38:56.050 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:38:56.050 14:49:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:38:56.310 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.C9y 00:38:56.310 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.C9y 00:38:56.310 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.C9y 00:38:56.310 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:38:56.310 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3269723 00:38:56.310 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3269723 ']' 00:38:56.310 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:56.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
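The trace above repeatedly runs the suite's `gen_dhchap_key` helper: draw `len/2` random bytes with `xxd`, wrap them in a `DHHC-1:<digest>:<base64>:` secret, write it to a `mktemp` file, and `chmod 0600` it. A self-contained sketch of that flow, assuming `python3` and `xxd` are available (the function name mirrors the log, and the encoding follows the NVMe DH-HMAC-CHAP secret representation — base64 of the key bytes followed by their little-endian CRC-32, with a two-hex-digit digest id: 00=null, 01=sha256, 02=sha384, 03=sha512 — not necessarily SPDK's exact implementation):

```shell
#!/usr/bin/env bash
# Sketch of the gen_dhchap_key flow seen in the trace (names are illustrative).
gen_dhchap_key() {
    local digest=$1 len=$2 hex
    hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars = len/2 bytes
    # DHHC-1 secret: base64(key bytes || CRC-32 of key, little endian)
    python3 - "$digest" "$hex" <<'PY'
import base64, sys, zlib
digest, hexkey = sys.argv[1], sys.argv[2]
key = bytes.fromhex(hexkey)
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{int(digest):02x}:{base64.b64encode(key + crc).decode()}:")
PY
}

# As in the trace: a sha384-sized secret, stored in a mode-0600 temp file.
key=$(gen_dhchap_key 2 48)
file=$(mktemp -t spdk.key-sha384.XXX)
echo "$key" > "$file"
chmod 0600 "$file"
```

The digest id only labels which hash the secret is intended for; the random material itself is hash-agnostic, which is why the log's `null` keys use the same encoding with id `00`.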
00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GtY 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.fR1 ]] 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fR1 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.X7Y 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
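The loop traced above walks the generated secrets and registers each one with the target keyring: `key$i` for every slot, plus a controller key `ckey$i` only when one was generated for that slot (the `[[ -n ... ]]` guard). A minimal sketch of that loop with `rpc_cmd` stubbed out, so it runs without a live SPDK target (the key paths here are hypothetical stand-ins for the log's `mktemp` names):

```shell
#!/usr/bin/env bash
# Stand-in for SPDK's RPC wrapper; the real one talks to /var/tmp/spdk.sock.
rpc_cmd() { echo "rpc: $*"; }

keys=(/tmp/k0 /tmp/k1)   # hypothetical paths; the trace uses mktemp names
ckeys=(/tmp/c0 "")       # slot 1 has no controller key

for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
    # Controller ("c") keys are optional per slot, as in host/auth.sh@82.
    if [[ -n ${ckeys[i]} ]]; then
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done
```

This matches the pattern in the trace, where slot 4 (`ckeys[4]=`) is skipped by the `[[ -n '' ]]` test while slots 0 through 3 register both keys.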
00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.nwp ]] 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.nwp 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ylX 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.hhp ]] 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hhp 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.MP7 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.311 14:49:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.311 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.311 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.FGT ]] 00:38:56.311 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.FGT 00:38:56.311 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.311 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.311 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.311 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:38:56.311 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.C9y 00:38:56.311 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.311 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:38:56.570 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:56.570 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:38:56.570 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:38:56.570 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:38:56.570 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:38:56.570 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:38:56.570 14:49:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:38:56.570 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:56.570 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:56.570 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:38:56.570 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:56.570 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:38:56.570 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:38:56.570 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:38:56.570 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:38:56.570 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:38:56.570 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:38:56.571 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:38:56.571 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:38:56.571 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:56.571 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:38:56.571 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:56.571 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:38:56.571 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:56.571 14:49:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:59.865 Waiting for block devices as requested 00:38:59.865 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:59.865 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:59.865 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:00.125 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:00.125 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:00.125 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:00.398 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:00.398 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:00.398 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:00.658 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:00.658 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:00.918 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:00.918 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:00.918 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:00.918 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:01.178 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:01.178 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:02.118 No valid GPT data, bailing 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 
-- # echo 10.0.0.1 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:02.118 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:39:02.379 00:39:02.379 Discovery Log Number of Records 2, Generation counter 2 00:39:02.379 =====Discovery Log Entry 0====== 00:39:02.379 trtype: tcp 00:39:02.379 adrfam: ipv4 00:39:02.379 subtype: current discovery subsystem 00:39:02.379 treq: not specified, sq flow control disable supported 00:39:02.379 portid: 1 00:39:02.379 trsvcid: 4420 00:39:02.379 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:02.379 traddr: 10.0.0.1 00:39:02.379 eflags: none 00:39:02.379 sectype: none 00:39:02.379 =====Discovery Log Entry 1====== 00:39:02.379 trtype: tcp 00:39:02.379 adrfam: ipv4 00:39:02.379 subtype: nvme subsystem 00:39:02.379 treq: not specified, sq flow control disable supported 00:39:02.379 portid: 1 00:39:02.379 trsvcid: 4420 00:39:02.379 subnqn: nqn.2024-02.io.spdk:cnode0 00:39:02.379 traddr: 10.0.0.1 00:39:02.379 eflags: none 00:39:02.379 sectype: none 00:39:02.379 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:39:02.379 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:39:02.379 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:39:02.379 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:39:02.379 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:02.379 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:02.379 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:02.379 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:02.379 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:02.379 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:02.379 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:02.379 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:02.379 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:02.379 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: ]] 00:39:02.379 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:02.379 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:39:02.379 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:39:02.379 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:39:02.379 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:39:02.379 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:39:02.380 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:02.380 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:39:02.380 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:39:02.380 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:02.380 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:02.380 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:39:02.380 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.380 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:02.380 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.380 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:02.380 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:02.380 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:02.380 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:02.380 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:02.380 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:02.380 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:02.380 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:02.380 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:02.380 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:02.380 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:02.380 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:02.380 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.380 14:49:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:02.380 nvme0n1 00:39:02.380 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.380 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:02.380 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:02.380 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.380 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:02.380 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: ]] 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 
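Before each `bdev_nvme_attach_controller` call, the trace resolves the target address via `get_main_ns_ip`: pick the right environment-variable *name* for the transport from an associative array, then dereference it. A sketch of that lookup, using the same variable names as `nvmf/common.sh` (the `TEST_TRANSPORT`/`NVMF_INITIATOR_IP` values below are example settings mirroring the trace, not real configuration):

```shell
#!/usr/bin/env bash
# Sketch of the get_main_ns_ip logic traced above.
get_main_ns_ip() {
    local var ip
    declare -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    # Bail out if the transport is unset or unknown, as the [[ -z ]] checks do.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    var=${ip_candidates[$TEST_TRANSPORT]}
    ip=${!var}                       # indirect expansion: value of $NVMF_..._IP
    [[ -z $ip ]] && return 1
    echo "$ip"
}

# Example environment mirroring the trace:
TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1
get_main_ns_ip   # prints 10.0.0.1 given the variables above
```

The two-level lookup keeps the function transport-agnostic: the array maps transport to variable name, and bash indirection (`${!var}`) fetches the actual IP, which is why the trace shows `ip=NVMF_INITIATOR_IP` before `echo 10.0.0.1`.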
00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:02.641 nvme0n1
00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:02.641 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:02.902 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:02.902 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:39:02.902 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:02.902 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:02.902 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:02.902 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==:
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==:
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==:
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: ]]
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==:
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:02.903 nvme0n1
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR:
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X:
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR:
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: ]]
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X:
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:02.903 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:03.165 nvme0n1
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==:
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo:
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==:
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: ]]
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo:
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:03.165 14:49:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:03.426 nvme0n1
00:39:03.426 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:03.426 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=:
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=:
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:03.427 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:03.688 nvme0n1
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m:
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=:
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m:
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: ]]
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=:
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:03.688 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:03.949 nvme0n1
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==:
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==:
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==:
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: ]]
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==:
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:03.949 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:04.210 nvme0n1
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR:
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X:
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR:
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: ]]
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X:
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:04.210 14:49:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:04.471 nvme0n1
00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==:
00:39:04.471 14:49:28
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==: 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: ]] 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.471 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.732 nvme0n1 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=: 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=: 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.732 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.993 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.993 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:04.993 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:04.993 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:04.993 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:04.993 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:04.993 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:04.993 14:49:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:04.993 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:04.993 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:04.993 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:04.993 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:04.993 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:04.993 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.993 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.993 nvme0n1 00:39:04.993 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.993 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:04.993 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:04.993 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.993 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:04.993 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.993 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:04.993 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:04.993 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.993 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:05.254 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.254 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:05.254 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:05.254 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:39:05.254 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:05.254 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:05.254 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:05.254 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:05.254 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:05.254 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:05.254 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:05.254 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:05.254 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:05.254 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: ]] 00:39:05.254 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:05.255 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:39:05.255 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:05.255 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:05.255 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:05.255 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:05.255 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:05.255 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:39:05.255 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.255 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.255 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.255 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:05.255 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:05.255 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:05.255 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:05.255 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:05.255 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:05.255 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:05.255 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:05.255 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:39:05.255 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:05.255 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:05.255 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:05.255 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.255 14:49:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.516 nvme0n1 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: ]] 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:05.516 
14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.516 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.778 nvme0n1 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:05.778 14:49:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: ]] 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.778 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.039 nvme0n1 00:39:06.039 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.039 14:49:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:06.039 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:06.039 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.039 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==: 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: 00:39:06.300 
14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==: 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: ]] 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:06.300 14:49:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.300 14:49:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.561 nvme0n1 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.561 14:49:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=: 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=: 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:06.561 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:06.562 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:06.562 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:06.562 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:06.562 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:39:06.562 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.562 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.562 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.562 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:06.562 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:06.562 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:06.562 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:06.562 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:06.562 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:06.562 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:06.562 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:06.562 
14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:06.562 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:06.562 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:06.562 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:06.562 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.562 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.823 nvme0n1 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: ]] 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:06.823 14:49:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.823 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.083 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:07.083 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:07.083 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:07.083 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:07.083 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:07.083 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:07.083 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:07.083 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:07.083 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:07.083 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:07.083 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:07.083 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:07.083 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:07.083 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:07.083 14:49:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.343 nvme0n1 00:39:07.343 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:07.343 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:07.343 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:07.343 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:07.343 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.343 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:07.603 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:07.603 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:07.603 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:07.603 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.603 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:07.603 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:07.603 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:39:07.603 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:07.603 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:07.603 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:07.603 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:07.603 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:07.603 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:07.603 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:07.603 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:07.603 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:07.603 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: ]] 00:39:07.603 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:07.604 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:39:07.604 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:07.604 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:07.604 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:07.604 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:07.604 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:07.604 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:39:07.604 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:07.604 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.604 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:07.604 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:07.604 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:07.604 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:07.604 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:07.604 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:07.604 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:07.604 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:07.604 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:07.604 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:07.604 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:07.604 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:07.604 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:07.604 14:49:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:07.604 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:07.863 nvme0n1 00:39:07.863 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:07.863 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:07.864 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:07.864 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:07.864 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: ]] 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:08.128 14:49:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.128 14:49:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:08.698 nvme0n1 00:39:08.698 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.698 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:08.698 14:49:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:08.698 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.698 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:08.698 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.698 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:08.698 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==: 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:08.699 14:49:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==: 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: ]] 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:08.699 14:49:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.699 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:08.959 nvme0n1 00:39:08.959 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.959 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:08.959 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:08.959 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.959 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:09.218 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:09.218 14:49:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:09.218 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:09.218 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:09.218 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:09.218 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:09.218 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:09.218 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:39:09.218 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:09.218 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:09.218 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:09.218 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:09.218 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=: 00:39:09.218 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:09.218 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:09.218 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:09.218 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=: 00:39:09.218 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:09.218 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:39:09.218 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:09.219 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:09.219 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:09.219 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:09.219 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:09.219 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:39:09.219 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:09.219 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:09.219 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:09.219 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:09.219 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:09.219 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:09.219 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:09.219 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:09.219 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:09.219 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:09.219 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:09.219 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:09.219 14:49:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:09.219 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:09.219 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:09.219 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:09.219 14:49:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:09.788 nvme0n1 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: ]] 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:09.788 14:49:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:09.788 14:49:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:10.359 nvme0n1 00:39:10.359 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.359 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:10.359 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:10.359 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.359 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:10.359 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.619 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:10.619 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:10.619 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.619 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:10.619 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.619 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:10.619 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:39:10.619 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:10.619 14:49:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:10.619 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:10.619 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:10.619 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:10.619 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:10.619 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:10.619 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:10.619 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:10.620 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: ]] 00:39:10.620 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:10.620 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:39:10.620 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:10.620 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:10.620 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:10.620 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:10.620 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:10.620 14:49:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:10.620 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.620 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:10.620 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.620 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:10.620 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:10.620 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:10.620 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:10.620 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:10.620 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:10.620 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:10.620 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:10.620 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:10.620 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:10.620 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:10.620 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:10.620 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.620 14:49:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:11.190 nvme0n1 00:39:11.190 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:11.190 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:11.503 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:11.503 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:11.503 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:11.503 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:11.503 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:11.503 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:11.503 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:11.503 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:11.503 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:11.503 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:11.503 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:39:11.503 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:11.503 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:11.503 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:11.503 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:11.503 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:11.503 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:11.503 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:11.503 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:11.503 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: ]] 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:11.504 14:49:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:11.504 14:49:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:12.167 nvme0n1 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==: 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==: 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: ]] 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:12.167 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:12.168 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:12.168 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:12.168 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:12.168 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:12.168 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:12.168 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:12.168 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:12.168 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:12.168 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:12.168 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:12.168 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:12.168 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:12.168 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:12.168 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:12.168 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:12.168 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:12.168 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:12.168 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:12.168 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:12.168 14:49:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:13.179 nvme0n1 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=: 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=: 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:13.179 
14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.179 14:49:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:13.750 nvme0n1 00:39:13.750 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.750 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:13.750 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:13.750 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.750 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:13.750 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.750 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:13.750 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:13.750 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.750 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: ]] 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.010 nvme0n1 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:14.010 
14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:14.010 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:14.011 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:14.011 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:14.011 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:14.011 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:14.011 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:14.011 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: ]] 00:39:14.011 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:14.011 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:39:14.011 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:14.011 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:14.011 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:14.011 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:14.011 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:14.011 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:39:14.011 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.011 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.271 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.271 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:14.271 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:14.271 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:14.271 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:14.271 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:14.271 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:14.271 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:14.271 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:14.271 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:14.271 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:14.271 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:14.271 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.272 nvme0n1 
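The `get_main_ns_ip` trace above repeats before every connection attempt. As a hedged sketch (the real helper lives in `nvmf/common.sh`; the variable values below are stand-ins for the test environment's exports, chosen so the tcp path echoes the `10.0.0.1` seen in the log), the candidate-selection logic it executes is roughly:

```shell
# Stand-in environment values (assumptions for illustration):
NVMF_FIRST_TARGET_IP=10.0.0.2   # what the rdma path would pick
NVMF_INITIATOR_IP=10.0.0.1      # matches the address echoed in the log
TEST_TRANSPORT=tcp

# Map the transport to the *name* of the variable holding the IP,
# then dereference it -- mirroring the ip_candidates trace above.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # Bail out if the transport is unset or has no candidate
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    # Indirect expansion: resolve the variable named by $ip
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}
```

With `TEST_TRANSPORT=tcp` this prints `10.0.0.1`, which is why every `bdev_nvme_attach_controller` call in the log targets that address.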
00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:14.272 14:49:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: ]] 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:14.272 
14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.272 14:49:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.533 nvme0n1 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.533 14:49:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==: 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==: 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: ]] 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.533 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.794 nvme0n1 00:39:14.794 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.794 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:14.794 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=: 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=: 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:14.795 14:49:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.795 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.056 nvme0n1 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: ]] 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:15.056 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.317 nvme0n1 00:39:15.317 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:15.317 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:15.317 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:15.317 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:15.317 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.317 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:15.317 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:15.317 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:15.317 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:15.317 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.317 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:15.317 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:15.317 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:39:15.317 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:15.317 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:15.317 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:15.317 
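The `DHHC-1:NN:...:` strings traced above are NVMe DH-HMAC-CHAP secret representations. As a hedged sketch (field meanings follow the nvme-cli/libnvme convention as I understand it: the second field names the transformation hash, `00` = none, `01` = SHA-256, `02` = SHA-384, `03` = SHA-512, and the base64 payload is assumed to end in a 4-byte CRC-32 trailer, which this sketch splits off but does not verify), a minimal parser:

```python
import base64

# Assumed mapping of the DHHC-1 hash-indicator field (second colon field).
HASH_NAMES = {"00": "none", "01": "sha256", "02": "sha384", "03": "sha512"}

def parse_dhchap_secret(rep: str):
    """Split "DHHC-1:<hh>:<base64>:" into (hash_name, secret_bytes, crc_bytes).

    The trailing 4 bytes of the decoded payload are treated as a CRC-32
    trailer (an assumption of this sketch; it is not checksummed here).
    """
    prefix, hh, b64 = rep.rstrip(":").split(":")
    if prefix != "DHHC-1" or hh not in HASH_NAMES:
        raise ValueError("not a DHHC-1 secret representation")
    raw = base64.b64decode(b64)
    return HASH_NAMES[hh], raw[:-4], raw[-4:]

# One of the keyid=1 secrets from the log above:
name, secret, crc = parse_dhchap_secret(
    "DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==:"
)
print(name, len(secret))
```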
14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:15.317 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:15.317 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:15.317 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:15.317 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:15.317 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:15.317 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: ]] 00:39:15.317 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:15.318 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:39:15.318 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:15.318 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:15.318 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:15.318 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:15.318 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:15.318 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:15.318 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:39:15.318 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.318 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:15.318 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:15.318 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:15.318 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:15.318 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:15.318 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:15.318 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:15.318 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:15.318 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:15.318 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:15.318 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:15.318 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:15.318 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:15.318 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:15.318 14:49:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.579 nvme0n1 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 
00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: ]] 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:15.579 14:49:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:15.579 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.840 nvme0n1 00:39:15.840 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:15.840 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:15.840 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:15.840 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:15.840 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.840 14:49:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:15.840 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:15.840 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:15.840 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:15.840 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.840 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:15.840 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:15.840 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:39:15.840 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:15.840 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:15.840 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:15.840 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:15.840 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==: 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==: 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: ]] 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:15.841 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.101 nvme0n1 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=: 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=: 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:16.101 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:16.102 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:16.102 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:16.102 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:16.102 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:16.102 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:16.102 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:16.102 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:16.102 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:16.102 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:16.102 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:16.102 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:16.102 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.362 nvme0n1 00:39:16.362 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:16.362 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:16.362 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:16.362 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:16.362 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.362 14:49:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:16.362 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:16.362 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:16.362 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:16.362 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.362 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:16.362 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:16.362 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:16.362 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:16.363 14:49:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: ]] 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:16.363 14:49:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:16.363 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:16.363 14:49:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.939 nvme0n1 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: ]] 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:16.939 
14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:16.939 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:17.200 nvme0n1 00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:17.200 14:49:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:17.200 14:49:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR:
00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: ]]
00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X:
00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:39:17.200 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:39:17.201 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:39:17.201 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:39:17.201 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:39:17.201 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:39:17.201 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:17.201 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:17.201 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:17.201 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:39:17.201 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:39:17.201 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:39:17.201 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:39:17.201 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:17.201 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:17.201 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:39:17.201 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:17.201 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:39:17.201 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:39:17.201 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:39:17.201 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:39:17.201 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:17.201 14:49:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:17.461 nvme0n1
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==:
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo:
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==:
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: ]]
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo:
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:17.461 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:18.031 nvme0n1
00:39:18.031 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:18.031 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:39:18.031 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:39:18.031 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:18.031 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:18.031 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:18.031 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:18.031 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=:
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=:
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:18.032 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:18.292 nvme0n1
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m:
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=:
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m:
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: ]]
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=:
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:18.292 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:18.293 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:39:18.293 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:18.293 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:39:18.293 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:39:18.293 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:39:18.293 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:39:18.293 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:18.293 14:49:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:18.862 nvme0n1
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==:
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==:
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==:
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: ]]
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==:
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:18.862 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:19.434 nvme0n1
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR:
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X:
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR:
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: ]]
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X:
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:19.434 14:49:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:19.434 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:19.434 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:39:19.434 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:39:19.434 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:39:19.434 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:39:19.434 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:19.434 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:19.434 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:39:19.434 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:19.434 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:39:19.434 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:39:19.434 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:39:19.434 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:39:19.434 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:19.434 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:20.005 nvme0n1
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==:
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo:
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==:
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: ]]
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo:
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:20.005 14:49:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:20.576 nvme0n1
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=:
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=:
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:39:20.576 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.837 nvme0n1 00:39:20.837 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.837 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:20.837 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:20.837 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.837 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:20.837 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:21.097 14:49:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: ]] 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.097 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:21.098 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:21.098 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:21.098 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:21.098 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:21.098 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:21.098 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:21.098 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:21.098 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:21.098 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:21.098 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:21.098 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:21.098 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.098 14:49:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.668 nvme0n1 00:39:21.668 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:39:21.668 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:21.668 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:21.668 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.668 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.668 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: ]] 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.929 14:49:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.502 nvme0n1 00:39:22.502 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:22.502 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:22.502 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:22.502 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:39:22.502 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.502 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: ]] 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:22.762 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:22.763 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:22.763 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:22.763 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:22.763 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:22.763 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:22.763 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:22.763 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:22.763 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:22.763 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:22.763 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:22.763 14:49:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.335 nvme0n1 00:39:23.335 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:23.335 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:23.335 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:23.335 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:23.335 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.335 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==: 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==: 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: ]] 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 
-- # ip=NVMF_INITIATOR_IP 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:23.595 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.165 nvme0n1 00:39:24.165 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:24.165 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:24.165 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:24.165 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:24.165 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=: 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=: 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:24.427 14:49:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:25.375 nvme0n1 00:39:25.375 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:25.375 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:25.375 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.375 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.375 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:25.375 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:25.375 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:25.375 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:25.375 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.375 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: ]] 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:25.376 14:49:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.376 nvme0n1 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.376 14:49:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: ]] 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.376 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.636 nvme0n1 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: ]] 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.637 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.897 nvme0n1 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==: 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==: 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: ]] 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 
-- # ip=NVMF_INITIATOR_IP 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.897 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.157 nvme0n1 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=: 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=: 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.157 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:26.417 nvme0n1 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:26.417 14:49:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: ]] 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.417 14:49:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:26.417 14:49:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:26.417 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.417 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.677 nvme0n1 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:26.677 14:49:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: ]] 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 
00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.677 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.936 nvme0n1 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.936 
14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: ]] 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:26.936 14:49:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.936 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.196 nvme0n1 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:27.196 14:49:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==: 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==: 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: ]] 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:27.196 14:49:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:27.196 14:49:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.457 nvme0n1 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:39:27.457 14:49:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=: 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=: 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:39:27.457 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.458 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:27.458 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:27.458 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:27.458 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:27.458 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:27.458 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:27.458 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:27.458 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:27.458 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:27.458 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:27.458 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:27.458 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:27.458 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:27.458 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:27.458 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.719 nvme0n1 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:27.719 
14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: ]] 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:27.719 
14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:27.719 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.979 nvme0n1 00:39:27.979 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:27.979 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:27.979 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:27.979 14:49:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:27.979 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:27.979 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: ]] 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # local -A ip_candidates 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.240 14:49:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.500 nvme0n1 00:39:28.500 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.500 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:28.500 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:28.500 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.500 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.500 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.500 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:28.500 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:28.500 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.500 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.500 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.500 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:28.500 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:39:28.500 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:28.500 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:28.500 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:28.500 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: ]] 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.501 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.761 nvme0n1 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==: 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==: 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: ]] 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:28.761 14:49:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:28.761 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.021 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.021 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:29.021 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:29.021 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:29.021 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:29.021 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:29.021 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:29.021 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:29.021 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:29.021 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:29.021 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:29.021 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:29.021 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:29.021 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.021 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.281 nvme0n1 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:29.281 14:49:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=: 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=: 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.281 14:49:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.541 nvme0n1 00:39:29.541 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.541 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:29.542 
14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: ]] 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:29.542 14:49:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:29.542 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.112 nvme0n1 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:30.112 14:49:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: ]] 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A 
ip_candidates 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:30.112 14:49:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.698 nvme0n1 00:39:30.698 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: ]] 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:30.699 
14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:30.699 14:49:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:30.699 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:31.267 nvme0n1 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.267 14:49:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==: 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==: 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: ]] 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.267 14:49:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:31.836 nvme0n1 00:39:31.836 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.836 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:31.836 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:31.836 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.836 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:31.836 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.836 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:31.836 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:31.836 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:31.837 14:49:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=: 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=: 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.837 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.407 nvme0n1 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:32.407 
14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGQ0NzNjMDA2ZmI1ZGRhNmE1YzA2NTAyOTA4MWUxNzkfh//m: 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: ]] 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTk5MDg4YjdlMjBmZGQ2MGI2MGJkNWE5ZjkzYjZiNWMzYjY1M2NhZjliNTFmNThkNWE1ZWIwNGExN2Y5MmJmZAuka4A=: 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:32.407 14:49:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:32.407 14:49:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:33.347 nvme0n1 00:39:33.347 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:33.347 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:33.347 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:33.348 14:49:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:33.348 14:49:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: ]] 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:33.348 14:49:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.348 14:49:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:33.917 nvme0n1 00:39:33.917 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:33.917 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:33.917 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:33.917 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.917 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:33.917 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:33.917 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:33.917 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:33.917 14:49:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.917 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.177 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:34.177 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:34.177 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:39:34.177 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:34.177 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:34.177 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:34.177 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:34.177 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:34.177 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:34.177 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:34.177 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:34.177 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:34.177 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: ]] 00:39:34.177 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:34.177 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:39:34.177 14:49:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:34.177 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:34.178 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:34.178 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:39:34.178 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:34.178 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:39:34.178 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:34.178 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.178 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:34.178 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:34.178 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:34.178 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:34.178 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:34.178 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:34.178 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:34.178 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:34.178 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:34.178 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:34.178 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:34.178 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:34.178 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:34.178 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:34.178 14:49:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.747 nvme0n1 00:39:34.747 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:34.747 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:34.747 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:34.747 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:34.747 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:34.747 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:34.747 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:34.747 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:34.747 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:34.747 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.007 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:35.007 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:35.007 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:39:35.007 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:35.007 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:35.007 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:35.007 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:39:35.007 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==: 00:39:35.007 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: 00:39:35.007 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:35.007 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:35.007 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjczNjkxM2NhYWU4MmYxMTUzYjBhODllNjRhMjhjNGYwOWRkYjcyNjRiMDk4ZGE5OGPoTA==: 00:39:35.007 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: ]] 00:39:35.007 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NhNTg4Y2VjNWVjZjExNjhjNDhjMjgwZDVjYTgxOGVQO1Jo: 00:39:35.007 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:39:35.007 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:35.008 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:35.008 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:35.008 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:39:35.008 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:35.008 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:39:35.008 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:35.008 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.008 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:35.008 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:35.008 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:35.008 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:35.008 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:35.008 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:35.008 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:35.008 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:35.008 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:35.008 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:35.008 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:35.008 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:35.008 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:39:35.008 14:49:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:35.008 14:49:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.578 nvme0n1 00:39:35.578 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:35.578 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:35.578 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:35.578 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:35.578 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.578 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:35.578 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:35.578 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:35.578 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:35.578 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.838 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:35.838 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:39:35.838 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:39:35.838 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:35.838 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:39:35.838 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=: 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2MzNTNmYjI4MjQyZDQ2ZjhiM2Y0Y2U1MGQwYjViZjI5Y2M5ZjUzNWU2NzNhZTQzYTc2OThmOTBiNjRjZDM1ZND6bLg=: 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:39:35.839 
14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:35.839 14:49:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.409 nvme0n1 00:39:36.409 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:36.409 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:39:36.409 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:39:36.409 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:36.409 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:39:36.409 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:36.409 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: ]] 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.669 request: 00:39:36.669 { 00:39:36.669 "name": "nvme0", 00:39:36.669 "trtype": "tcp", 00:39:36.669 "traddr": "10.0.0.1", 00:39:36.669 "adrfam": "ipv4", 00:39:36.669 "trsvcid": "4420", 00:39:36.669 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:39:36.669 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:39:36.669 "prchk_reftag": false, 00:39:36.669 "prchk_guard": false, 00:39:36.669 "hdgst": false, 00:39:36.669 "ddgst": false, 00:39:36.669 "allow_unrecognized_csi": false, 00:39:36.669 "method": "bdev_nvme_attach_controller", 00:39:36.669 "req_id": 1 00:39:36.669 } 00:39:36.669 Got JSON-RPC error 
response 00:39:36.669 response: 00:39:36.669 { 00:39:36.669 "code": -5, 00:39:36.669 "message": "Input/output error" 00:39:36.669 } 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:36.669 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 
-- # [[ -z tcp ]] 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.670 request: 
00:39:36.670 { 00:39:36.670 "name": "nvme0", 00:39:36.670 "trtype": "tcp", 00:39:36.670 "traddr": "10.0.0.1", 00:39:36.670 "adrfam": "ipv4", 00:39:36.670 "trsvcid": "4420", 00:39:36.670 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:39:36.670 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:39:36.670 "prchk_reftag": false, 00:39:36.670 "prchk_guard": false, 00:39:36.670 "hdgst": false, 00:39:36.670 "ddgst": false, 00:39:36.670 "dhchap_key": "key2", 00:39:36.670 "allow_unrecognized_csi": false, 00:39:36.670 "method": "bdev_nvme_attach_controller", 00:39:36.670 "req_id": 1 00:39:36.670 } 00:39:36.670 Got JSON-RPC error response 00:39:36.670 response: 00:39:36.670 { 00:39:36.670 "code": -5, 00:39:36.670 "message": "Input/output error" 00:39:36.670 } 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.670 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:36.931 14:50:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.931 request: 00:39:36.931 { 00:39:36.931 "name": "nvme0", 00:39:36.931 "trtype": "tcp", 00:39:36.931 "traddr": "10.0.0.1", 00:39:36.931 "adrfam": "ipv4", 00:39:36.931 "trsvcid": "4420", 00:39:36.931 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:39:36.931 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:39:36.931 "prchk_reftag": false, 00:39:36.931 "prchk_guard": false, 00:39:36.931 "hdgst": false, 00:39:36.931 "ddgst": false, 00:39:36.931 "dhchap_key": "key1", 00:39:36.931 "dhchap_ctrlr_key": "ckey2", 00:39:36.931 "allow_unrecognized_csi": false, 00:39:36.931 "method": "bdev_nvme_attach_controller", 00:39:36.931 "req_id": 1 00:39:36.931 } 00:39:36.931 Got JSON-RPC error response 00:39:36.931 response: 00:39:36.931 { 00:39:36.931 "code": -5, 00:39:36.931 "message": "Input/output error" 00:39:36.931 } 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:36.931 nvme0n1 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:39:36.931 14:50:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: ]] 00:39:36.931 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:39:37.191 
14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.191 request: 00:39:37.191 { 00:39:37.191 "name": "nvme0", 00:39:37.191 "dhchap_key": "key1", 00:39:37.191 "dhchap_ctrlr_key": "ckey2", 00:39:37.191 "method": "bdev_nvme_set_keys", 00:39:37.191 "req_id": 1 00:39:37.191 } 00:39:37.191 Got JSON-RPC error response 00:39:37.191 response: 
00:39:37.191 { 00:39:37.191 "code": -13, 00:39:37.191 "message": "Permission denied" 00:39:37.191 } 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:39:37.191 14:50:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:39:38.573 14:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:39:38.573 14:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:39:38.573 14:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:38.573 14:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:38.573 14:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:38.573 14:50:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:39:38.573 14:50:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:39:39.513 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:39:39.513 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:39:39.513 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:39.513 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.513 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:39.513 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:39:39.513 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:39:39.513 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:39.513 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:39.513 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:39.513 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:39:39.513 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:39.513 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:39.513 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:39.513 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:39.513 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjM3NmE2ZGI3MzhmMjIyOWJlODFjOTIwMzhhZDM3MDIyZTk4OGM5OWY2YWRhOTA0/o8xgg==: 00:39:39.513 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: ]] 00:39:39.513 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTk0MzgxOGNlYTUyOTRkZTdlY2QzNzExOTk4N2EzMDU1OGIwZWUwYTVhMTU2NmMwB3ddxg==: 00:39:39.514 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:39:39.514 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:39:39.514 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:39:39.514 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:39:39.514 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:39.514 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:39.514 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:39:39.514 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:39.514 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:39:39.514 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:39:39.514 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:39:39.514 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:39:39.514 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:39.514 14:50:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.514 nvme0n1 00:39:39.514 14:50:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjJmZWI2MzU0OWM2ZTkzYjM3ZjZjMzExNDEyOTAzOTkf1fKR: 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: ]] 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjIwMTIyMWE4MjA2ZjVjNmE2YTBmODUzYjc4OTJjNWXW3j1X: 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:39:39.514 14:50:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.514 request: 00:39:39.514 { 00:39:39.514 "name": "nvme0", 00:39:39.514 "dhchap_key": "key2", 00:39:39.514 "dhchap_ctrlr_key": "ckey1", 00:39:39.514 "method": "bdev_nvme_set_keys", 00:39:39.514 "req_id": 1 00:39:39.514 } 00:39:39.514 Got JSON-RPC error response 00:39:39.514 response: 00:39:39.514 { 00:39:39.514 "code": -13, 00:39:39.514 "message": "Permission denied" 00:39:39.514 } 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:39:39.514 14:50:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:39.514 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:39.774 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:39:39.774 14:50:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:40.712 rmmod nvme_tcp 
00:39:40.712 rmmod nvme_fabrics 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 3269723 ']' 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 3269723 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 3269723 ']' 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 3269723 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3269723 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3269723' 00:39:40.712 killing process with pid 3269723 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 3269723 00:39:40.712 14:50:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 3269723 00:39:41.651 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:41.651 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:41.651 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:41.651 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:39:41.651 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:39:41.651 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:41.651 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:39:41.651 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:41.651 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:41.651 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:41.651 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:41.651 14:50:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:43.563 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:43.563 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:39:43.563 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:39:43.563 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:39:43.563 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:39:43.563 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:39:43.563 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:39:43.563 14:50:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:39:43.563 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:43.563 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:39:43.563 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:39:43.563 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:39:43.823 14:50:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:47.121 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:47.121 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:47.121 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:47.121 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:47.121 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:47.121 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:47.121 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:47.121 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:47.121 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:47.380 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:47.380 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:47.380 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:47.380 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:47.380 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:47.380 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:47.380 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:47.380 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:39:47.640 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.GtY /tmp/spdk.key-null.X7Y /tmp/spdk.key-sha256.ylX /tmp/spdk.key-sha384.MP7 
/tmp/spdk.key-sha512.C9y /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:39:47.640 14:50:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:51.120 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:39:51.120 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:39:51.120 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:39:51.120 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:39:51.120 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:39:51.120 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:39:51.120 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:39:51.120 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:39:51.120 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:39:51.120 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:39:51.120 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:39:51.120 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:39:51.120 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:39:51.120 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:39:51.120 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:39:51.120 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:39:51.120 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:39:51.120 00:39:51.120 real 1m4.173s 00:39:51.120 user 0m57.814s 00:39:51.120 sys 0m15.907s 00:39:51.120 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:51.120 14:50:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.120 ************************************ 00:39:51.121 END TEST nvmf_auth_host 00:39:51.121 ************************************ 00:39:51.381 14:50:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:39:51.381 14:50:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:39:51.381 14:50:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:39:51.381 14:50:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:51.381 14:50:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:39:51.381 ************************************ 00:39:51.381 START TEST nvmf_digest 00:39:51.381 ************************************ 00:39:51.381 14:50:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:39:51.381 * Looking for test storage... 00:39:51.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:39:51.381 14:50:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:51.381 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:39:51.381 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:51.381 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:51.381 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:51.381 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:51.381 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:51.381 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:39:51.381 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:39:51.381 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:39:51.381 14:50:15 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:39:51.381 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:39:51.381 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:39:51.381 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:39:51.381 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:51.381 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:39:51.381 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:39:51.381 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:51.381 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:51.381 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:39:51.381 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:39:51.381 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:51.381 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:39:51.381 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:39:51.643 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:39:51.643 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:39:51.643 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:51.643 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:39:51.643 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:39:51.643 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:51.643 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:39:51.643 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:39:51.643 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:51.643 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:51.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.643 --rc genhtml_branch_coverage=1 00:39:51.643 --rc genhtml_function_coverage=1 00:39:51.643 --rc genhtml_legend=1 00:39:51.643 --rc geninfo_all_blocks=1 00:39:51.643 --rc geninfo_unexecuted_blocks=1 00:39:51.643 00:39:51.643 ' 00:39:51.643 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:51.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.644 --rc genhtml_branch_coverage=1 00:39:51.644 --rc genhtml_function_coverage=1 00:39:51.644 --rc genhtml_legend=1 00:39:51.644 --rc geninfo_all_blocks=1 00:39:51.644 --rc geninfo_unexecuted_blocks=1 00:39:51.644 00:39:51.644 ' 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:51.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.644 --rc genhtml_branch_coverage=1 00:39:51.644 --rc genhtml_function_coverage=1 00:39:51.644 --rc genhtml_legend=1 00:39:51.644 --rc geninfo_all_blocks=1 00:39:51.644 --rc geninfo_unexecuted_blocks=1 00:39:51.644 00:39:51.644 ' 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:51.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.644 --rc genhtml_branch_coverage=1 00:39:51.644 --rc genhtml_function_coverage=1 00:39:51.644 --rc genhtml_legend=1 00:39:51.644 --rc geninfo_all_blocks=1 00:39:51.644 --rc geninfo_unexecuted_blocks=1 00:39:51.644 00:39:51.644 ' 00:39:51.644 14:50:15 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:51.644 
14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:51.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:51.644 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:39:51.645 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:39:51.645 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:39:51.645 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:39:51.645 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:39:51.645 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:51.645 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:51.645 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:51.645 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:51.645 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:51.645 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:51.645 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:51.645 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:51.645 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:51.645 14:50:15 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:51.645 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:39:51.645 14:50:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:59.775 14:50:22 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:59.775 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:59.775 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:59.775 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:59.776 Found net devices under 0000:31:00.0: cvl_0_0 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:59.776 Found net devices under 0000:31:00.1: cvl_0_1 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@440 -- # is_hw=yes 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:59.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:59.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:39:59.776 00:39:59.776 --- 10.0.0.2 ping statistics --- 00:39:59.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:59.776 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:59.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
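[Editorial note] Each connectivity check above ends with a `rtt min/avg/max/mdev = ...` summary line from ping. If a script needed the average rtt from such a line, two `cut` passes are enough (the literal line here is copied from the log):

```shell
#!/usr/bin/env bash
# rtt summary line exactly as ping printed it above
line='rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms'
# Take the part after '=', then the second '/'-separated field (the avg value)
avg=$(echo "$line" | cut -d= -f2 | cut -d/ -f2)
echo "$avg"   # 0.521
```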
00:39:59.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:39:59.776 00:39:59.776 --- 10.0.0.1 ping statistics --- 00:39:59.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:59.776 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:39:59.776 ************************************ 00:39:59.776 START TEST nvmf_digest_clean 00:39:59.776 ************************************ 00:39:59.776 
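[Editorial note] host/digest.sh@140 above installs `trap cleanup SIGINT SIGTERM EXIT` before the tests run, so teardown happens whether the script finishes normally or is interrupted. A minimal, self-contained sketch of that pattern (the temp file is illustrative, not SPDK's actual cleanup):

```shell
#!/usr/bin/env bash
# The handler fires on normal exit as well as on SIGINT/SIGTERM,
# so scratch state is removed on every path out of the script.
tmpfile=$(mktemp)
cleanup() { rm -f "$tmpfile"; }
trap cleanup SIGINT SIGTERM EXIT
echo "scratch data" > "$tmpfile"
# ... test body runs here; cleanup fires when the script ends
```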
14:50:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=3287250 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 3287250 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3287250 ']' 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:59.776 14:50:22 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:59.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:59.776 14:50:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:59.776 [2024-10-07 14:50:22.609593] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:39:59.776 [2024-10-07 14:50:22.609698] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:59.776 [2024-10-07 14:50:22.733712] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:59.776 [2024-10-07 14:50:22.912743] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:59.776 [2024-10-07 14:50:22.912790] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:59.776 [2024-10-07 14:50:22.912802] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:59.776 [2024-10-07 14:50:22.912813] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:59.776 [2024-10-07 14:50:22.912822] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
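[Editorial note] The `waitforlisten` step above (common/autotest_common.sh@835-838, with `max_retries=100`) blocks until the target's RPC socket `/var/tmp/spdk.sock` appears. A hedged sketch of that polling idea — the function name and interval are illustrative, not SPDK's actual implementation:

```shell
#!/usr/bin/env bash
# Poll for a UNIX-domain socket with a bounded retry count.
wait_for_rpc_sock() {
  local sock=$1 max_retries=${2:-100}
  while (( max_retries-- > 0 )); do
    [ -S "$sock" ] && return 0   # -S: path exists and is a socket
    sleep 0.1
  done
  return 1
}
# A path that should not exist, so the bounded retry loop gives up
wait_for_rpc_sock /var/tmp/no-such.sock 3 || echo "timed out"
```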
00:39:59.776 [2024-10-07 14:50:22.914039] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:39:59.776 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:59.776 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:39:59.776 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:59.776 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:59.776 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:39:59.776 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:59.776 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:39:59.776 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:39:59.776 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:39:59.776 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:59.776 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:40:00.036 null0 00:40:00.036 [2024-10-07 14:50:23.671314] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:00.036 [2024-10-07 14:50:23.695570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:00.036 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:00.036 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:40:00.036 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:40:00.036 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:40:00.036 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:40:00.036 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:40:00.036 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:40:00.036 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:40:00.036 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3287578 00:40:00.036 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3287578 /var/tmp/bperf.sock 00:40:00.036 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3287578 ']' 00:40:00.036 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:40:00.036 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:00.036 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:00.036 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:00.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:40:00.036 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:00.036 14:50:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:40:00.297 [2024-10-07 14:50:23.779933] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:40:00.297 [2024-10-07 14:50:23.780047] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287578 ] 00:40:00.297 [2024-10-07 14:50:23.913386] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:00.557 [2024-10-07 14:50:24.090663] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:01.127 14:50:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:01.127 14:50:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:40:01.127 14:50:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:40:01.127 14:50:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:40:01.127 14:50:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:01.387 14:50:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:01.387 14:50:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:01.647 nvme0n1 00:40:01.647 14:50:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:40:01.647 14:50:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:01.908 Running I/O for 2 seconds... 00:40:03.791 17452.00 IOPS, 68.17 MiB/s [2024-10-07T12:50:27.500Z] 17632.00 IOPS, 68.88 MiB/s 00:40:03.791 Latency(us) 00:40:03.791 [2024-10-07T12:50:27.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:03.791 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:40:03.791 nvme0n1 : 2.04 17316.04 67.64 0.00 0.00 7239.60 3345.07 44127.57 00:40:03.791 [2024-10-07T12:50:27.500Z] =================================================================================================================== 00:40:03.791 [2024-10-07T12:50:27.500Z] Total : 17316.04 67.64 0.00 0.00 7239.60 3345.07 44127.57 00:40:03.791 { 00:40:03.791 "results": [ 00:40:03.791 { 00:40:03.791 "job": "nvme0n1", 00:40:03.791 "core_mask": "0x2", 00:40:03.791 "workload": "randread", 00:40:03.791 "status": "finished", 00:40:03.791 "queue_depth": 128, 00:40:03.791 "io_size": 4096, 00:40:03.791 "runtime": 2.043885, 00:40:03.791 "iops": 17316.042732345508, 00:40:03.791 "mibps": 67.64079192322464, 00:40:03.791 "io_failed": 0, 00:40:03.791 "io_timeout": 0, 00:40:03.791 "avg_latency_us": 7239.600723327306, 00:40:03.791 "min_latency_us": 3345.0666666666666, 00:40:03.791 "max_latency_us": 44127.573333333334 00:40:03.791 } 00:40:03.791 ], 00:40:03.791 "core_count": 1 00:40:03.792 } 00:40:03.792 14:50:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:40:03.792 14:50:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:40:03.792 14:50:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:40:03.792 14:50:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:40:03.792 | select(.opcode=="crc32c") 00:40:03.792 | "\(.module_name) \(.executed)"' 00:40:03.792 14:50:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:40:04.051 14:50:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:40:04.051 14:50:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:40:04.051 14:50:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:40:04.051 14:50:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:40:04.051 14:50:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3287578 00:40:04.051 14:50:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3287578 ']' 00:40:04.051 14:50:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3287578 00:40:04.051 14:50:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:40:04.051 14:50:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:04.051 14:50:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3287578 00:40:04.051 14:50:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:04.051 14:50:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:04.052 14:50:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3287578' 00:40:04.052 killing process with pid 3287578 00:40:04.052 14:50:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3287578 00:40:04.052 Received shutdown signal, test time was about 2.000000 seconds 00:40:04.052 00:40:04.052 Latency(us) 00:40:04.052 [2024-10-07T12:50:27.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:04.052 [2024-10-07T12:50:27.761Z] =================================================================================================================== 00:40:04.052 [2024-10-07T12:50:27.761Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:04.052 14:50:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3287578 00:40:04.622 14:50:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:40:04.622 14:50:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:40:04.622 14:50:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:40:04.622 14:50:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:40:04.622 14:50:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:40:04.622 14:50:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:40:04.622 14:50:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:40:04.622 14:50:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3288405 00:40:04.622 14:50:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 3288405 /var/tmp/bperf.sock 00:40:04.622 14:50:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3288405 ']' 00:40:04.622 14:50:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:40:04.622 14:50:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:04.622 14:50:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:04.622 14:50:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:04.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:04.623 14:50:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:04.623 14:50:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:40:04.623 [2024-10-07 14:50:28.303827] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:40:04.623 [2024-10-07 14:50:28.303934] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3288405 ] 00:40:04.623 I/O size of 131072 is greater than zero copy threshold (65536). 00:40:04.623 Zero copy mechanism will not be used. 
00:40:04.882 [2024-10-07 14:50:28.429475] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:04.882 [2024-10-07 14:50:28.566071] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:05.452 14:50:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:05.452 14:50:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:40:05.452 14:50:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:40:05.452 14:50:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:40:05.452 14:50:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:06.023 14:50:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:06.023 14:50:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:06.023 nvme0n1 00:40:06.023 14:50:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:40:06.023 14:50:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:06.284 I/O size of 131072 is greater than zero copy threshold (65536). 00:40:06.284 Zero copy mechanism will not be used. 00:40:06.284 Running I/O for 2 seconds... 
00:40:08.163 5816.00 IOPS, 727.00 MiB/s [2024-10-07T12:50:31.872Z] 5653.50 IOPS, 706.69 MiB/s 00:40:08.163 Latency(us) 00:40:08.163 [2024-10-07T12:50:31.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:08.163 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:40:08.163 nvme0n1 : 2.00 5657.67 707.21 0.00 0.00 2824.74 525.65 6935.89 00:40:08.163 [2024-10-07T12:50:31.872Z] =================================================================================================================== 00:40:08.163 [2024-10-07T12:50:31.872Z] Total : 5657.67 707.21 0.00 0.00 2824.74 525.65 6935.89 00:40:08.163 { 00:40:08.163 "results": [ 00:40:08.163 { 00:40:08.163 "job": "nvme0n1", 00:40:08.163 "core_mask": "0x2", 00:40:08.163 "workload": "randread", 00:40:08.163 "status": "finished", 00:40:08.163 "queue_depth": 16, 00:40:08.163 "io_size": 131072, 00:40:08.163 "runtime": 2.003297, 00:40:08.163 "iops": 5657.6733255228755, 00:40:08.163 "mibps": 707.2091656903594, 00:40:08.163 "io_failed": 0, 00:40:08.163 "io_timeout": 0, 00:40:08.163 "avg_latency_us": 2824.744596200224, 00:40:08.163 "min_latency_us": 525.6533333333333, 00:40:08.163 "max_latency_us": 6935.893333333333 00:40:08.163 } 00:40:08.163 ], 00:40:08.163 "core_count": 1 00:40:08.163 } 00:40:08.163 14:50:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:40:08.163 14:50:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:40:08.163 14:50:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:40:08.163 14:50:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:40:08.163 | select(.opcode=="crc32c") 00:40:08.163 | "\(.module_name) \(.executed)"' 00:40:08.163 14:50:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:40:08.423 14:50:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:40:08.423 14:50:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:40:08.423 14:50:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:40:08.423 14:50:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:40:08.423 14:50:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3288405 00:40:08.423 14:50:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3288405 ']' 00:40:08.423 14:50:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3288405 00:40:08.423 14:50:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:40:08.423 14:50:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:08.423 14:50:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3288405 00:40:08.423 14:50:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:08.423 14:50:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:08.423 14:50:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3288405' 00:40:08.423 killing process with pid 3288405 00:40:08.423 14:50:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3288405 00:40:08.423 Received shutdown signal, test time was about 2.000000 seconds 
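[Editorial note] The MiB/s column in the bdevperf tables above is just `iops * io_size / 2^20`. Checking the 131072-byte randread run (5657.67 IOPS) reproduces the 707.21 MiB/s figure from the table:

```shell
#!/usr/bin/env bash
# 5657.67 IOPS at 131072 bytes per I/O, converted to MiB/s
iops=5657.67
io_size=131072
awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f MiB/s\n", i * s / 1048576 }'
# prints 707.21 MiB/s, matching the Latency(us) table above
```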
00:40:08.423 00:40:08.423 Latency(us) 00:40:08.423 [2024-10-07T12:50:32.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:08.423 [2024-10-07T12:50:32.132Z] =================================================================================================================== 00:40:08.423 [2024-10-07T12:50:32.132Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:08.423 14:50:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3288405 00:40:08.993 14:50:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:40:08.993 14:50:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:40:08.993 14:50:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:40:08.993 14:50:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:40:08.993 14:50:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:40:08.993 14:50:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:40:08.993 14:50:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:40:08.993 14:50:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3289277 00:40:08.993 14:50:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3289277 /var/tmp/bperf.sock 00:40:08.993 14:50:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3289277 ']' 00:40:08.993 14:50:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:40:08.993 14:50:32 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:08.993 14:50:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:08.993 14:50:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:08.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:08.993 14:50:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:08.993 14:50:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:40:08.993 [2024-10-07 14:50:32.656271] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:40:08.993 [2024-10-07 14:50:32.656378] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3289277 ] 00:40:09.252 [2024-10-07 14:50:32.781822] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:09.252 [2024-10-07 14:50:32.918298] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:09.822 14:50:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:09.822 14:50:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:40:09.822 14:50:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:40:09.822 14:50:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:40:09.822 14:50:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:10.082 14:50:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:10.082 14:50:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:10.343 nvme0n1 00:40:10.343 14:50:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:40:10.343 14:50:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:10.603 Running I/O for 2 seconds... 
00:40:12.483 19570.00 IOPS, 76.45 MiB/s [2024-10-07T12:50:36.192Z] 19622.50 IOPS, 76.65 MiB/s 00:40:12.483 Latency(us) 00:40:12.483 [2024-10-07T12:50:36.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:12.483 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:12.483 nvme0n1 : 2.01 19619.69 76.64 0.00 0.00 6515.47 2498.56 11741.87 00:40:12.483 [2024-10-07T12:50:36.192Z] =================================================================================================================== 00:40:12.483 [2024-10-07T12:50:36.192Z] Total : 19619.69 76.64 0.00 0.00 6515.47 2498.56 11741.87 00:40:12.483 { 00:40:12.483 "results": [ 00:40:12.483 { 00:40:12.483 "job": "nvme0n1", 00:40:12.483 "core_mask": "0x2", 00:40:12.483 "workload": "randwrite", 00:40:12.483 "status": "finished", 00:40:12.483 "queue_depth": 128, 00:40:12.483 "io_size": 4096, 00:40:12.483 "runtime": 2.006811, 00:40:12.483 "iops": 19619.685162180194, 00:40:12.483 "mibps": 76.63939516476638, 00:40:12.483 "io_failed": 0, 00:40:12.483 "io_timeout": 0, 00:40:12.483 "avg_latency_us": 6515.468594214309, 00:40:12.483 "min_latency_us": 2498.56, 00:40:12.483 "max_latency_us": 11741.866666666667 00:40:12.483 } 00:40:12.483 ], 00:40:12.483 "core_count": 1 00:40:12.483 } 00:40:12.483 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:40:12.483 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:40:12.483 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:40:12.483 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:40:12.483 | select(.opcode=="crc32c") 00:40:12.483 | "\(.module_name) \(.executed)"' 00:40:12.483 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:40:12.743 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:40:12.743 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:40:12.743 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:40:12.743 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:40:12.743 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3289277 00:40:12.743 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3289277 ']' 00:40:12.743 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3289277 00:40:12.743 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:40:12.743 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:12.743 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3289277 00:40:12.743 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:12.743 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:12.743 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3289277' 00:40:12.743 killing process with pid 3289277 00:40:12.743 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3289277 00:40:12.743 Received shutdown signal, test time was about 2.000000 seconds 
00:40:12.743 00:40:12.743 Latency(us) 00:40:12.743 [2024-10-07T12:50:36.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:12.743 [2024-10-07T12:50:36.452Z] =================================================================================================================== 00:40:12.743 [2024-10-07T12:50:36.452Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:12.743 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3289277 00:40:13.313 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:40:13.313 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:40:13.313 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:40:13.313 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:40:13.313 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:40:13.313 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:40:13.313 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:40:13.313 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3289969 00:40:13.313 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3289969 /var/tmp/bperf.sock 00:40:13.313 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3289969 ']' 00:40:13.313 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:40:13.313 14:50:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:13.313 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:13.313 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:13.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:13.313 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:13.313 14:50:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:40:13.313 [2024-10-07 14:50:36.943730] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:40:13.313 [2024-10-07 14:50:36.943843] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3289969 ] 00:40:13.313 I/O size of 131072 is greater than zero copy threshold (65536). 00:40:13.313 Zero copy mechanism will not be used. 
00:40:13.574 [2024-10-07 14:50:37.067734] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:13.574 [2024-10-07 14:50:37.208375] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:14.144 14:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:14.144 14:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:40:14.144 14:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:40:14.144 14:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:40:14.144 14:50:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:14.404 14:50:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:14.404 14:50:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:14.972 nvme0n1 00:40:14.972 14:50:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:40:14.972 14:50:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:14.972 I/O size of 131072 is greater than zero copy threshold (65536). 00:40:14.972 Zero copy mechanism will not be used. 00:40:14.972 Running I/O for 2 seconds... 
00:40:16.849 2879.00 IOPS, 359.88 MiB/s [2024-10-07T12:50:40.558Z] 3148.50 IOPS, 393.56 MiB/s 00:40:16.849 Latency(us) 00:40:16.849 [2024-10-07T12:50:40.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:16.849 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:40:16.849 nvme0n1 : 2.01 3148.86 393.61 0.00 0.00 5074.06 2116.27 9229.65 00:40:16.849 [2024-10-07T12:50:40.558Z] =================================================================================================================== 00:40:16.849 [2024-10-07T12:50:40.558Z] Total : 3148.86 393.61 0.00 0.00 5074.06 2116.27 9229.65 00:40:16.849 { 00:40:16.849 "results": [ 00:40:16.849 { 00:40:16.849 "job": "nvme0n1", 00:40:16.849 "core_mask": "0x2", 00:40:16.849 "workload": "randwrite", 00:40:16.849 "status": "finished", 00:40:16.849 "queue_depth": 16, 00:40:16.849 "io_size": 131072, 00:40:16.849 "runtime": 2.005808, 00:40:16.849 "iops": 3148.8557229804646, 00:40:16.849 "mibps": 393.6069653725581, 00:40:16.849 "io_failed": 0, 00:40:16.849 "io_timeout": 0, 00:40:16.849 "avg_latency_us": 5074.064834283302, 00:40:16.849 "min_latency_us": 2116.266666666667, 00:40:16.849 "max_latency_us": 9229.653333333334 00:40:16.849 } 00:40:16.849 ], 00:40:16.849 "core_count": 1 00:40:16.849 } 00:40:16.849 14:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:40:16.849 14:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:40:16.849 14:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:40:16.849 14:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:40:16.849 | select(.opcode=="crc32c") 00:40:16.849 | "\(.module_name) \(.executed)"' 00:40:16.849 14:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:40:17.110 14:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:40:17.110 14:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:40:17.110 14:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:40:17.110 14:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:40:17.110 14:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3289969 00:40:17.110 14:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3289969 ']' 00:40:17.110 14:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3289969 00:40:17.110 14:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:40:17.110 14:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:17.110 14:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3289969 00:40:17.110 14:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:17.110 14:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:17.110 14:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3289969' 00:40:17.110 killing process with pid 3289969 00:40:17.110 14:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3289969 00:40:17.110 Received shutdown signal, test time was about 2.000000 seconds 
00:40:17.110 00:40:17.110 Latency(us) 00:40:17.110 [2024-10-07T12:50:40.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:17.110 [2024-10-07T12:50:40.819Z] =================================================================================================================== 00:40:17.110 [2024-10-07T12:50:40.819Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:17.110 14:50:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3289969 00:40:17.681 14:50:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3287250 00:40:17.681 14:50:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3287250 ']' 00:40:17.681 14:50:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3287250 00:40:17.681 14:50:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:40:17.681 14:50:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:17.681 14:50:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3287250 00:40:17.681 14:50:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:17.681 14:50:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:17.681 14:50:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3287250' 00:40:17.681 killing process with pid 3287250 00:40:17.681 14:50:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3287250 00:40:17.681 14:50:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3287250 00:40:18.621 00:40:18.621 
real 0m19.742s 00:40:18.621 user 0m37.828s 00:40:18.621 sys 0m3.858s 00:40:18.621 14:50:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:18.621 14:50:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:40:18.621 ************************************ 00:40:18.621 END TEST nvmf_digest_clean 00:40:18.621 ************************************ 00:40:18.621 14:50:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:40:18.621 14:50:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:18.621 14:50:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:18.621 14:50:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:40:18.882 ************************************ 00:40:18.882 START TEST nvmf_digest_error 00:40:18.882 ************************************ 00:40:18.882 14:50:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:40:18.882 14:50:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:40:18.882 14:50:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:18.882 14:50:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:18.882 14:50:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:18.882 14:50:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=3291024 00:40:18.882 14:50:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 3291024 00:40:18.882 14:50:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:40:18.882 14:50:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3291024 ']' 00:40:18.882 14:50:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:18.882 14:50:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:18.882 14:50:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:18.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:18.882 14:50:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:18.882 14:50:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:18.882 [2024-10-07 14:50:42.442272] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:40:18.882 [2024-10-07 14:50:42.442389] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:18.882 [2024-10-07 14:50:42.583388] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:19.142 [2024-10-07 14:50:42.765164] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:19.142 [2024-10-07 14:50:42.765209] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:40:19.142 [2024-10-07 14:50:42.765220] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:19.142 [2024-10-07 14:50:42.765231] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:19.142 [2024-10-07 14:50:42.765240] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:19.142 [2024-10-07 14:50:42.766430] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:40:19.712 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:19.712 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:40:19.712 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:19.712 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:19.712 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:19.712 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:19.712 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:40:19.712 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:19.712 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:19.712 [2024-10-07 14:50:43.240153] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:40:19.712 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:19.712 14:50:43 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:40:19.712 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:40:19.712 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:19.712 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:19.971 null0 00:40:19.971 [2024-10-07 14:50:43.503981] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:19.971 [2024-10-07 14:50:43.528250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:19.971 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:19.971 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:40:19.971 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:40:19.971 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:40:19.971 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:40:19.971 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:40:19.971 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3291356 00:40:19.971 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3291356 /var/tmp/bperf.sock 00:40:19.971 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3291356 ']' 00:40:19.971 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:40:19.971 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:19.972 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:19.972 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:19.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:19.972 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:19.972 14:50:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:19.972 [2024-10-07 14:50:43.610314] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:40:19.972 [2024-10-07 14:50:43.610419] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291356 ] 00:40:20.231 [2024-10-07 14:50:43.739509] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:20.231 [2024-10-07 14:50:43.887366] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:20.800 14:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:20.800 14:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:40:20.800 14:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:40:20.800 14:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:40:21.059 14:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:40:21.059 14:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:21.059 14:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:21.059 14:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:21.059 14:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:21.059 14:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:21.319 nvme0n1 00:40:21.319 14:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:40:21.319 14:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:21.319 14:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:21.319 14:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:21.319 14:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:40:21.319 14:50:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:21.580 Running I/O for 2 seconds... 00:40:21.580 [2024-10-07 14:50:45.100188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.580 [2024-10-07 14:50:45.100234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.580 [2024-10-07 14:50:45.100248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.580 [2024-10-07 14:50:45.111751] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.580 [2024-10-07 14:50:45.111778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.580 [2024-10-07 14:50:45.111790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.580 [2024-10-07 14:50:45.126807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.580 [2024-10-07 14:50:45.126831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.580 [2024-10-07 14:50:45.126841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.580 [2024-10-07 14:50:45.141669] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.580 [2024-10-07 14:50:45.141692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 
nsid:1 lba:3391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.580 [2024-10-07 14:50:45.141701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.580 [2024-10-07 14:50:45.156769] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.580 [2024-10-07 14:50:45.156793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.580 [2024-10-07 14:50:45.156802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.580 [2024-10-07 14:50:45.170071] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.580 [2024-10-07 14:50:45.170095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.580 [2024-10-07 14:50:45.170104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.580 [2024-10-07 14:50:45.182863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.580 [2024-10-07 14:50:45.182885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.580 [2024-10-07 14:50:45.182894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.580 [2024-10-07 14:50:45.194775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.580 [2024-10-07 
14:50:45.194798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.580 [2024-10-07 14:50:45.194807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.580 [2024-10-07 14:50:45.209554] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.580 [2024-10-07 14:50:45.209582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.580 [2024-10-07 14:50:45.209592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.580 [2024-10-07 14:50:45.224023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.580 [2024-10-07 14:50:45.224044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.580 [2024-10-07 14:50:45.224053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.580 [2024-10-07 14:50:45.237309] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.580 [2024-10-07 14:50:45.237332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.580 [2024-10-07 14:50:45.237341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.580 [2024-10-07 14:50:45.252421] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.580 [2024-10-07 14:50:45.252444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.580 [2024-10-07 14:50:45.252453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.580 [2024-10-07 14:50:45.267566] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.580 [2024-10-07 14:50:45.267588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.580 [2024-10-07 14:50:45.267598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.580 [2024-10-07 14:50:45.279223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.580 [2024-10-07 14:50:45.279246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.580 [2024-10-07 14:50:45.279255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.841 [2024-10-07 14:50:45.294018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.841 [2024-10-07 14:50:45.294040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.841 [2024-10-07 14:50:45.294049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.841 [2024-10-07 14:50:45.308282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.841 [2024-10-07 14:50:45.308304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.841 [2024-10-07 14:50:45.308313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.841 [2024-10-07 14:50:45.322369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.841 [2024-10-07 14:50:45.322391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.841 [2024-10-07 14:50:45.322400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.841 [2024-10-07 14:50:45.336828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.841 [2024-10-07 14:50:45.336851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.841 [2024-10-07 14:50:45.336861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.841 [2024-10-07 14:50:45.348144] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.841 [2024-10-07 14:50:45.348167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.841 [2024-10-07 14:50:45.348176] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.841 [2024-10-07 14:50:45.362494] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.841 [2024-10-07 14:50:45.362517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.841 [2024-10-07 14:50:45.362525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.841 [2024-10-07 14:50:45.376907] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.841 [2024-10-07 14:50:45.376930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.841 [2024-10-07 14:50:45.376939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.841 [2024-10-07 14:50:45.392439] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.841 [2024-10-07 14:50:45.392461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.841 [2024-10-07 14:50:45.392470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.841 [2024-10-07 14:50:45.405672] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.841 [2024-10-07 14:50:45.405695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6837 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:40:21.841 [2024-10-07 14:50:45.405704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.841 [2024-10-07 14:50:45.420248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.841 [2024-10-07 14:50:45.420271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.841 [2024-10-07 14:50:45.420279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.841 [2024-10-07 14:50:45.433845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.841 [2024-10-07 14:50:45.433867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.841 [2024-10-07 14:50:45.433876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.841 [2024-10-07 14:50:45.448607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.841 [2024-10-07 14:50:45.448633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.841 [2024-10-07 14:50:45.448642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.841 [2024-10-07 14:50:45.462065] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.841 [2024-10-07 14:50:45.462088] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.841 [2024-10-07 14:50:45.462097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.841 [2024-10-07 14:50:45.474974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.841 [2024-10-07 14:50:45.474996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.841 [2024-10-07 14:50:45.475012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.841 [2024-10-07 14:50:45.486764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.841 [2024-10-07 14:50:45.486787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.841 [2024-10-07 14:50:45.486796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.841 [2024-10-07 14:50:45.501519] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.841 [2024-10-07 14:50:45.501542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.841 [2024-10-07 14:50:45.501551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.841 [2024-10-07 14:50:45.515590] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500039ec00) 00:40:21.841 [2024-10-07 14:50:45.515613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.841 [2024-10-07 14:50:45.515622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.841 [2024-10-07 14:50:45.530393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.841 [2024-10-07 14:50:45.530416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.842 [2024-10-07 14:50:45.530425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:21.842 [2024-10-07 14:50:45.544975] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:21.842 [2024-10-07 14:50:45.544997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:21.842 [2024-10-07 14:50:45.545101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.102 [2024-10-07 14:50:45.557623] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.102 [2024-10-07 14:50:45.557645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.102 [2024-10-07 14:50:45.557654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.102 [2024-10-07 14:50:45.572797] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.102 [2024-10-07 14:50:45.572820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.102 [2024-10-07 14:50:45.572829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.102 [2024-10-07 14:50:45.587369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.102 [2024-10-07 14:50:45.587392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.102 [2024-10-07 14:50:45.587401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.102 [2024-10-07 14:50:45.601500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.102 [2024-10-07 14:50:45.601523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.102 [2024-10-07 14:50:45.601532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.102 [2024-10-07 14:50:45.612279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.102 [2024-10-07 14:50:45.612301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.102 [2024-10-07 14:50:45.612310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.102 [2024-10-07 14:50:45.626197] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.102 [2024-10-07 14:50:45.626219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.102 [2024-10-07 14:50:45.626228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.102 [2024-10-07 14:50:45.641526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.102 [2024-10-07 14:50:45.641548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.102 [2024-10-07 14:50:45.641557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.102 [2024-10-07 14:50:45.656063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.102 [2024-10-07 14:50:45.656084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.102 [2024-10-07 14:50:45.656093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.102 [2024-10-07 14:50:45.670986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.102 [2024-10-07 14:50:45.671014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.102 [2024-10-07 14:50:45.671023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.102 [2024-10-07 14:50:45.684350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.102 [2024-10-07 14:50:45.684376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.102 [2024-10-07 14:50:45.684384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.102 [2024-10-07 14:50:45.695898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.102 [2024-10-07 14:50:45.695921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.102 [2024-10-07 14:50:45.695930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.102 [2024-10-07 14:50:45.711146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.102 [2024-10-07 14:50:45.711168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.102 [2024-10-07 14:50:45.711177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.102 [2024-10-07 14:50:45.725644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.102 [2024-10-07 14:50:45.725667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:561 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:40:22.103 [2024-10-07 14:50:45.725676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.103 [2024-10-07 14:50:45.740372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.103 [2024-10-07 14:50:45.740395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.103 [2024-10-07 14:50:45.740404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.103 [2024-10-07 14:50:45.755250] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.103 [2024-10-07 14:50:45.755272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.103 [2024-10-07 14:50:45.755281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.103 [2024-10-07 14:50:45.769070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.103 [2024-10-07 14:50:45.769093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.103 [2024-10-07 14:50:45.769101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.103 [2024-10-07 14:50:45.782347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.103 [2024-10-07 14:50:45.782370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.103 [2024-10-07 14:50:45.782379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.103 [2024-10-07 14:50:45.793886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.103 [2024-10-07 14:50:45.793908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.103 [2024-10-07 14:50:45.793917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.103 [2024-10-07 14:50:45.809649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.103 [2024-10-07 14:50:45.809671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.103 [2024-10-07 14:50:45.809680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.364 [2024-10-07 14:50:45.825346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.364 [2024-10-07 14:50:45.825368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.364 [2024-10-07 14:50:45.825377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.364 [2024-10-07 14:50:45.840092] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500039ec00) 00:40:22.364 [2024-10-07 14:50:45.840114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.364 [2024-10-07 14:50:45.840123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.364 [2024-10-07 14:50:45.852196] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.364 [2024-10-07 14:50:45.852219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.364 [2024-10-07 14:50:45.852227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.364 [2024-10-07 14:50:45.865701] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.364 [2024-10-07 14:50:45.865723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.364 [2024-10-07 14:50:45.865732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.364 [2024-10-07 14:50:45.879338] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.364 [2024-10-07 14:50:45.879360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.364 [2024-10-07 14:50:45.879369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.364 [2024-10-07 14:50:45.896084] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.364 [2024-10-07 14:50:45.896106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.364 [2024-10-07 14:50:45.896115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.364 [2024-10-07 14:50:45.912262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.364 [2024-10-07 14:50:45.912284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.364 [2024-10-07 14:50:45.912293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.364 [2024-10-07 14:50:45.925538] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.364 [2024-10-07 14:50:45.925564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.364 [2024-10-07 14:50:45.925573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.364 [2024-10-07 14:50:45.937459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.364 [2024-10-07 14:50:45.937481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.364 [2024-10-07 14:50:45.937490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.364 [2024-10-07 14:50:45.951459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.364 [2024-10-07 14:50:45.951483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.364 [2024-10-07 14:50:45.951493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.364 [2024-10-07 14:50:45.965657] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.364 [2024-10-07 14:50:45.965680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.364 [2024-10-07 14:50:45.965690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.364 [2024-10-07 14:50:45.980229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.364 [2024-10-07 14:50:45.980251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.364 [2024-10-07 14:50:45.980260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.364 [2024-10-07 14:50:45.994834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.364 [2024-10-07 14:50:45.994856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.364 [2024-10-07 14:50:45.994865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.364 [2024-10-07 14:50:46.008804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.364 [2024-10-07 14:50:46.008828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.364 [2024-10-07 14:50:46.008837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.364 [2024-10-07 14:50:46.022716] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.364 [2024-10-07 14:50:46.022740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.364 [2024-10-07 14:50:46.022749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.364 [2024-10-07 14:50:46.037406] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.364 [2024-10-07 14:50:46.037429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.364 [2024-10-07 14:50:46.037439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.364 [2024-10-07 14:50:46.051192] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.364 [2024-10-07 14:50:46.051214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10629 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:40:22.364 [2024-10-07 14:50:46.051223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.364 [2024-10-07 14:50:46.063093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.364 [2024-10-07 14:50:46.063116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.364 [2024-10-07 14:50:46.063125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.625 [2024-10-07 14:50:46.079088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.625 [2024-10-07 14:50:46.079111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.625 [2024-10-07 14:50:46.079121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.625 18102.00 IOPS, 70.71 MiB/s [2024-10-07T12:50:46.334Z] [2024-10-07 14:50:46.090701] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.625 [2024-10-07 14:50:46.090724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.625 [2024-10-07 14:50:46.090739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.625 [2024-10-07 14:50:46.105248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.625 
[2024-10-07 14:50:46.105271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.625 [2024-10-07 14:50:46.105279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.625 [2024-10-07 14:50:46.120157] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.625 [2024-10-07 14:50:46.120180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.625 [2024-10-07 14:50:46.120189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.625 [2024-10-07 14:50:46.135030] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.625 [2024-10-07 14:50:46.135052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.625 [2024-10-07 14:50:46.135061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.625 [2024-10-07 14:50:46.148462] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.625 [2024-10-07 14:50:46.148484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.625 [2024-10-07 14:50:46.148493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.625 [2024-10-07 14:50:46.160877] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.625 [2024-10-07 14:50:46.160905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.625 [2024-10-07 14:50:46.160914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.625 [2024-10-07 14:50:46.173564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.625 [2024-10-07 14:50:46.173587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.625 [2024-10-07 14:50:46.173595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.625 [2024-10-07 14:50:46.189881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.625 [2024-10-07 14:50:46.189903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.625 [2024-10-07 14:50:46.189913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:22.625 [2024-10-07 14:50:46.204660] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:22.625 [2024-10-07 14:50:46.204683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:22.625 [2024-10-07 14:50:46.204692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.625 [2024-10-07 14:50:46.217793] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.625 [2024-10-07 14:50:46.217816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.625 [2024-10-07 14:50:46.217825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.625 [2024-10-07 14:50:46.229675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.625 [2024-10-07 14:50:46.229698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.625 [2024-10-07 14:50:46.229707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.625 [2024-10-07 14:50:46.242889] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.625 [2024-10-07 14:50:46.242911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.625 [2024-10-07 14:50:46.242920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.625 [2024-10-07 14:50:46.259029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.625 [2024-10-07 14:50:46.259052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.625 [2024-10-07 14:50:46.259061] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.625 [2024-10-07 14:50:46.273053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.625 [2024-10-07 14:50:46.273076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.625 [2024-10-07 14:50:46.273085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.625 [2024-10-07 14:50:46.287353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.625 [2024-10-07 14:50:46.287376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.625 [2024-10-07 14:50:46.287385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.625 [2024-10-07 14:50:46.301489] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.625 [2024-10-07 14:50:46.301511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.625 [2024-10-07 14:50:46.301520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.625 [2024-10-07 14:50:46.311925] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.625 [2024-10-07 14:50:46.311947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25290 len:1 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0
00:40:22.625 [2024-10-07 14:50:46.311956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.625 [2024-10-07 14:50:46.330104] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.625 [2024-10-07 14:50:46.330127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.625 [2024-10-07 14:50:46.330137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.886 [2024-10-07 14:50:46.340921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.886 [2024-10-07 14:50:46.340944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.886 [2024-10-07 14:50:46.340953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.886 [2024-10-07 14:50:46.356302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.886 [2024-10-07 14:50:46.356324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.886 [2024-10-07 14:50:46.356332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.886 [2024-10-07 14:50:46.371383] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.886 [2024-10-07 14:50:46.371406] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.886 [2024-10-07 14:50:46.371415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.886 [2024-10-07 14:50:46.384258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.886 [2024-10-07 14:50:46.384281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.886 [2024-10-07 14:50:46.384290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.886 [2024-10-07 14:50:46.399224] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.886 [2024-10-07 14:50:46.399250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.886 [2024-10-07 14:50:46.399260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.886 [2024-10-07 14:50:46.413662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.886 [2024-10-07 14:50:46.413685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.886 [2024-10-07 14:50:46.413694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.886 [2024-10-07 14:50:46.427790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest
error on tqpair=(0x61500039ec00)
00:40:22.886 [2024-10-07 14:50:46.427814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.886 [2024-10-07 14:50:46.427822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.886 [2024-10-07 14:50:46.440507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.886 [2024-10-07 14:50:46.440529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.886 [2024-10-07 14:50:46.440538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.886 [2024-10-07 14:50:46.453248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.886 [2024-10-07 14:50:46.453271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.886 [2024-10-07 14:50:46.453280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.886 [2024-10-07 14:50:46.468784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.886 [2024-10-07 14:50:46.468806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.886 [2024-10-07 14:50:46.468815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.886 [2024-10-07 14:50:46.482596]
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.886 [2024-10-07 14:50:46.482619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.886 [2024-10-07 14:50:46.482628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.886 [2024-10-07 14:50:46.496437] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.886 [2024-10-07 14:50:46.496460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.886 [2024-10-07 14:50:46.496469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.886 [2024-10-07 14:50:46.510179] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.886 [2024-10-07 14:50:46.510202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.886 [2024-10-07 14:50:46.510211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.886 [2024-10-07 14:50:46.524219] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.886 [2024-10-07 14:50:46.524242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.886 [2024-10-07 14:50:46.524251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1
cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.886 [2024-10-07 14:50:46.535074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.886 [2024-10-07 14:50:46.535096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.886 [2024-10-07 14:50:46.535105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.886 [2024-10-07 14:50:46.549689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.886 [2024-10-07 14:50:46.549712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.886 [2024-10-07 14:50:46.549721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.886 [2024-10-07 14:50:46.564582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.886 [2024-10-07 14:50:46.564605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.886 [2024-10-07 14:50:46.564613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.886 [2024-10-07 14:50:46.578268] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.886 [2024-10-07 14:50:46.578291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.886 [2024-10-07 14:50:46.578300] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:22.886 [2024-10-07 14:50:46.592358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:22.886 [2024-10-07 14:50:46.592381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:22.886 [2024-10-07 14:50:46.592390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.146 [2024-10-07 14:50:46.607595] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.146 [2024-10-07 14:50:46.607618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.147 [2024-10-07 14:50:46.607627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.147 [2024-10-07 14:50:46.619931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.147 [2024-10-07 14:50:46.619954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.147 [2024-10-07 14:50:46.619963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.147 [2024-10-07 14:50:46.632724] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.147 [2024-10-07 14:50:46.632747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3930 len:1 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0
00:40:23.147 [2024-10-07 14:50:46.632762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.147 [2024-10-07 14:50:46.647623] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.147 [2024-10-07 14:50:46.647646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.147 [2024-10-07 14:50:46.647654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.147 [2024-10-07 14:50:46.661973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.147 [2024-10-07 14:50:46.661996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.147 [2024-10-07 14:50:46.662011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.147 [2024-10-07 14:50:46.676231] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.147 [2024-10-07 14:50:46.676254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.147 [2024-10-07 14:50:46.676262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.147 [2024-10-07 14:50:46.690260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.147 [2024-10-07 14:50:46.690288] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.147 [2024-10-07 14:50:46.690297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.147 [2024-10-07 14:50:46.703389] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.147 [2024-10-07 14:50:46.703411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.147 [2024-10-07 14:50:46.703419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.147 [2024-10-07 14:50:46.717895] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.147 [2024-10-07 14:50:46.717917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.147 [2024-10-07 14:50:46.717925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.147 [2024-10-07 14:50:46.730856] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.147 [2024-10-07 14:50:46.730879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.147 [2024-10-07 14:50:46.730887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.147 [2024-10-07 14:50:46.743390] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on
tqpair=(0x61500039ec00)
00:40:23.147 [2024-10-07 14:50:46.743412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.147 [2024-10-07 14:50:46.743421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.147 [2024-10-07 14:50:46.757903] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.147 [2024-10-07 14:50:46.757925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.147 [2024-10-07 14:50:46.757934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.147 [2024-10-07 14:50:46.771559] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.147 [2024-10-07 14:50:46.771582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.147 [2024-10-07 14:50:46.771591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.147 [2024-10-07 14:50:46.787668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.147 [2024-10-07 14:50:46.787690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.147 [2024-10-07 14:50:46.787699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.147 [2024-10-07 14:50:46.801273]
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.147 [2024-10-07 14:50:46.801295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.147 [2024-10-07 14:50:46.801304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.147 [2024-10-07 14:50:46.814543] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.147 [2024-10-07 14:50:46.814566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.147 [2024-10-07 14:50:46.814575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.147 [2024-10-07 14:50:46.829686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.147 [2024-10-07 14:50:46.829709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.147 [2024-10-07 14:50:46.829718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.147 [2024-10-07 14:50:46.842929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.147 [2024-10-07 14:50:46.842951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.147 [2024-10-07 14:50:46.842960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1
cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.408 [2024-10-07 14:50:46.857529] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.408 [2024-10-07 14:50:46.857551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.408 [2024-10-07 14:50:46.857560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.408 [2024-10-07 14:50:46.871824] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.408 [2024-10-07 14:50:46.871846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.408 [2024-10-07 14:50:46.871858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.408 [2024-10-07 14:50:46.881821] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.408 [2024-10-07 14:50:46.881843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.408 [2024-10-07 14:50:46.881852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.408 [2024-10-07 14:50:46.897683] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.408 [2024-10-07 14:50:46.897705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.408 [2024-10-07 14:50:46.897714] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.408 [2024-10-07 14:50:46.909754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.408 [2024-10-07 14:50:46.909776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.408 [2024-10-07 14:50:46.909785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.408 [2024-10-07 14:50:46.925183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.408 [2024-10-07 14:50:46.925206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.408 [2024-10-07 14:50:46.925214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.408 [2024-10-07 14:50:46.937682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.408 [2024-10-07 14:50:46.937704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.408 [2024-10-07 14:50:46.937713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.408 [2024-10-07 14:50:46.953485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.408 [2024-10-07 14:50:46.953508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22149 len:1 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0
00:40:23.408 [2024-10-07 14:50:46.953517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.408 [2024-10-07 14:50:46.967893] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.408 [2024-10-07 14:50:46.967915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.408 [2024-10-07 14:50:46.967924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.408 [2024-10-07 14:50:46.982405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.408 [2024-10-07 14:50:46.982427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.408 [2024-10-07 14:50:46.982436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.408 [2024-10-07 14:50:46.994906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.408 [2024-10-07 14:50:46.994928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.408 [2024-10-07 14:50:46.994937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.408 [2024-10-07 14:50:47.009177] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.408 [2024-10-07 14:50:47.009199] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.408 [2024-10-07 14:50:47.009208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.408 [2024-10-07 14:50:47.023736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.408 [2024-10-07 14:50:47.023758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.408 [2024-10-07 14:50:47.023767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.408 [2024-10-07 14:50:47.038184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.408 [2024-10-07 14:50:47.038207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.408 [2024-10-07 14:50:47.038216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.408 [2024-10-07 14:50:47.051751] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.408 [2024-10-07 14:50:47.051774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.408 [2024-10-07 14:50:47.051783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.408 [2024-10-07 14:50:47.065417] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error
on tqpair=(0x61500039ec00)
00:40:23.408 [2024-10-07 14:50:47.065439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.408 [2024-10-07 14:50:47.065448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.408 [2024-10-07 14:50:47.078235] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:23.408 [2024-10-07 14:50:47.078257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:23.408 [2024-10-07 14:50:47.078265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:23.408 18250.50 IOPS, 71.29 MiB/s
00:40:23.408 Latency(us)
00:40:23.408 [2024-10-07T12:50:47.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:23.408 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:40:23.408 nvme0n1 : 2.00 18274.91 71.39 0.00 0.00 6997.82 2416.64 20534.61
00:40:23.408 [2024-10-07T12:50:47.117Z] ===================================================================================================================
00:40:23.408 [2024-10-07T12:50:47.117Z] Total : 18274.91 71.39 0.00 0.00 6997.82 2416.64 20534.61
00:40:23.408 {
00:40:23.408   "results": [
00:40:23.408     {
00:40:23.408       "job": "nvme0n1",
00:40:23.408       "core_mask": "0x2",
00:40:23.408       "workload": "randread",
00:40:23.408       "status": "finished",
00:40:23.408       "queue_depth": 128,
00:40:23.408       "io_size": 4096,
00:40:23.408       "runtime": 2.004333,
00:40:23.408       "iops": 18274.907413089542,
00:40:23.408       "mibps": 71.38635708238102,
00:40:23.408       "io_failed": 0,
00:40:23.408       "io_timeout": 0,
00:40:23.408       "avg_latency_us": 6997.816412860484,
00:40:23.408 "min_latency_us": 2416.64, 00:40:23.408 "max_latency_us": 20534.613333333335 00:40:23.408 } 00:40:23.408 ], 00:40:23.408 "core_count": 1 00:40:23.408 } 00:40:23.408 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:40:23.408 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:40:23.408 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:40:23.408 | .driver_specific 00:40:23.408 | .nvme_error 00:40:23.408 | .status_code 00:40:23.408 | .command_transient_transport_error' 00:40:23.408 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:40:23.671 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 )) 00:40:23.671 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3291356 00:40:23.671 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3291356 ']' 00:40:23.671 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3291356 00:40:23.671 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:40:23.671 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:23.671 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3291356 00:40:23.671 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:23.671 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- 
# '[' reactor_1 = sudo ']' 00:40:23.671 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3291356' 00:40:23.671 killing process with pid 3291356 00:40:23.671 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3291356 00:40:23.671 Received shutdown signal, test time was about 2.000000 seconds 00:40:23.671 00:40:23.671 Latency(us) 00:40:23.671 [2024-10-07T12:50:47.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:23.671 [2024-10-07T12:50:47.380Z] =================================================================================================================== 00:40:23.671 [2024-10-07T12:50:47.380Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:23.671 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3291356 00:40:24.241 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:40:24.241 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:40:24.241 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:40:24.241 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:40:24.241 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:40:24.241 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3292128 00:40:24.241 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3292128 /var/tmp/bperf.sock 00:40:24.241 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3292128 ']' 00:40:24.241 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:40:24.241 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:24.241 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:24.241 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:24.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:24.241 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:24.241 14:50:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:24.501 [2024-10-07 14:50:47.982716] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:40:24.501 [2024-10-07 14:50:47.982823] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292128 ] 00:40:24.501 I/O size of 131072 is greater than zero copy threshold (65536). 00:40:24.501 Zero copy mechanism will not be used. 
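For reference, the throughput figures in the results JSON printed above are internally consistent: the reported "mibps" is just "iops" times the I/O size, converted to MiB. A minimal sketch checking this against the values from the log (the JSON excerpt below is reproduced verbatim from the bdevperf output; the consistency rule is an assumption inferred from the units, not something the log states):

```python
import json

# Results excerpt reproduced from the bdevperf output earlier in this log.
results = json.loads("""
{
  "results": [
    {
      "job": "nvme0n1",
      "core_mask": "0x2",
      "workload": "randread",
      "status": "finished",
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 2.004333,
      "iops": 18274.907413089542,
      "mibps": 71.38635708238102,
      "io_failed": 0,
      "io_timeout": 0,
      "avg_latency_us": 6997.816412860484,
      "min_latency_us": 2416.64,
      "max_latency_us": 20534.613333333335
    }
  ],
  "core_count": 1
}
""")

job = results["results"][0]
# MiB/s should equal IOPS * io_size / 2^20 (1 MiB = 1048576 bytes).
derived_mibps = job["iops"] * job["io_size"] / (1 << 20)
print(round(derived_mibps, 2))  # 71.39, matching the reported "mibps"
```

The same identity explains the 18250.50 IOPS / 71.29 MiB/s interim line earlier in the log: with 4096-byte I/Os, MiB/s is always IOPS divided by 256.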
00:40:24.501 [2024-10-07 14:50:48.107366] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:24.761 [2024-10-07 14:50:48.245081] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:25.332 14:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:25.332 14:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:40:25.332 14:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:40:25.332 14:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:40:25.332 14:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:40:25.332 14:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:25.332 14:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:25.332 14:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:25.332 14:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:25.332 14:50:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:25.592 nvme0n1 00:40:25.592 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:40:25.592 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:25.592 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:25.592 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:25.592 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:40:25.592 14:50:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:25.854 I/O size of 131072 is greater than zero copy threshold (65536). 00:40:25.854 Zero copy mechanism will not be used. 00:40:25.854 Running I/O for 2 seconds... 00:40:25.854 [2024-10-07 14:50:49.329704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.854 [2024-10-07 14:50:49.329746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.854 [2024-10-07 14:50:49.329764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:25.854 [2024-10-07 14:50:49.341102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.854 [2024-10-07 14:50:49.341132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.854 [2024-10-07 14:50:49.341143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:40:25.854 [2024-10-07 14:50:49.350989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.854 [2024-10-07 14:50:49.351021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.854 [2024-10-07 14:50:49.351031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:25.854 [2024-10-07 14:50:49.356433] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.854 [2024-10-07 14:50:49.356456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.854 [2024-10-07 14:50:49.356465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:25.854 [2024-10-07 14:50:49.365580] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.854 [2024-10-07 14:50:49.365604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.854 [2024-10-07 14:50:49.365613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:25.854 [2024-10-07 14:50:49.373647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.854 [2024-10-07 14:50:49.373669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.854 [2024-10-07 14:50:49.373679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:25.854 [2024-10-07 14:50:49.382630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.854 [2024-10-07 14:50:49.382652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.854 [2024-10-07 14:50:49.382661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:25.854 [2024-10-07 14:50:49.391888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.854 [2024-10-07 14:50:49.391911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.854 [2024-10-07 14:50:49.391921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:25.854 [2024-10-07 14:50:49.401671] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.854 [2024-10-07 14:50:49.401694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.854 [2024-10-07 14:50:49.401703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:25.854 [2024-10-07 14:50:49.410565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.854 [2024-10-07 14:50:49.410588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.854 
[2024-10-07 14:50:49.410598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:25.854 [2024-10-07 14:50:49.416782] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.854 [2024-10-07 14:50:49.416803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.854 [2024-10-07 14:50:49.416812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:25.854 [2024-10-07 14:50:49.423115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.854 [2024-10-07 14:50:49.423137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.854 [2024-10-07 14:50:49.423147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:25.854 [2024-10-07 14:50:49.433246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.854 [2024-10-07 14:50:49.433267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.854 [2024-10-07 14:50:49.433277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:25.854 [2024-10-07 14:50:49.443313] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.854 [2024-10-07 14:50:49.443338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.854 [2024-10-07 14:50:49.443348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:25.854 [2024-10-07 14:50:49.451093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.854 [2024-10-07 14:50:49.451116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.854 [2024-10-07 14:50:49.451125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:25.854 [2024-10-07 14:50:49.458372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.854 [2024-10-07 14:50:49.458394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.854 [2024-10-07 14:50:49.458404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:25.854 [2024-10-07 14:50:49.468416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.854 [2024-10-07 14:50:49.468440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.854 [2024-10-07 14:50:49.468449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:25.854 [2024-10-07 14:50:49.476174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.854 [2024-10-07 
14:50:49.476197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.854 [2024-10-07 14:50:49.476210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:25.854 [2024-10-07 14:50:49.485471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.854 [2024-10-07 14:50:49.485493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.855 [2024-10-07 14:50:49.485502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:25.855 [2024-10-07 14:50:49.495368] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.855 [2024-10-07 14:50:49.495391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.855 [2024-10-07 14:50:49.495400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:25.855 [2024-10-07 14:50:49.505165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.855 [2024-10-07 14:50:49.505187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.855 [2024-10-07 14:50:49.505196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:25.855 [2024-10-07 14:50:49.514592] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.855 [2024-10-07 14:50:49.514615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.855 [2024-10-07 14:50:49.514624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:25.855 [2024-10-07 14:50:49.523989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.855 [2024-10-07 14:50:49.524017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.855 [2024-10-07 14:50:49.524026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:25.855 [2024-10-07 14:50:49.529887] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.855 [2024-10-07 14:50:49.529909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.855 [2024-10-07 14:50:49.529918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:25.855 [2024-10-07 14:50:49.536893] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.855 [2024-10-07 14:50:49.536915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.855 [2024-10-07 14:50:49.536924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:25.855 [2024-10-07 14:50:49.544724] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.855 [2024-10-07 14:50:49.544746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.855 [2024-10-07 14:50:49.544755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:25.855 [2024-10-07 14:50:49.551162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.855 [2024-10-07 14:50:49.551185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.855 [2024-10-07 14:50:49.551195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:25.855 [2024-10-07 14:50:49.558229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:25.855 [2024-10-07 14:50:49.558251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:25.855 [2024-10-07 14:50:49.558260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:26.117 [2024-10-07 14:50:49.567185] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.117 [2024-10-07 14:50:49.567208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.117 [2024-10-07 14:50:49.567217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:26.117 [2024-10-07 14:50:49.578549] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.117 [2024-10-07 14:50:49.578571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.117 [2024-10-07 14:50:49.578580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:26.117 [2024-10-07 14:50:49.589206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.117 [2024-10-07 14:50:49.589228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.117 [2024-10-07 14:50:49.589237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:26.117 [2024-10-07 14:50:49.600610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.117 [2024-10-07 14:50:49.600633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.117 [2024-10-07 14:50:49.600642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:26.117 [2024-10-07 14:50:49.613073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.117 [2024-10-07 14:50:49.613096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.117 [2024-10-07 14:50:49.613105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:26.117 [2024-10-07 14:50:49.622690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.117 [2024-10-07 14:50:49.622715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.117 [2024-10-07 14:50:49.622725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:26.117 [2024-10-07 14:50:49.634180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.117 [2024-10-07 14:50:49.634204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.117 [2024-10-07 14:50:49.634217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:26.117 [2024-10-07 14:50:49.645158] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.117 [2024-10-07 14:50:49.645182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.117 [2024-10-07 14:50:49.645191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:26.117 [2024-10-07 14:50:49.655729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.117 [2024-10-07 14:50:49.655753] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.117 [2024-10-07 14:50:49.655762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:26.117 [2024-10-07 14:50:49.660163] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.117 [2024-10-07 14:50:49.660186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.117 [2024-10-07 14:50:49.660195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:26.117 [2024-10-07 14:50:49.666561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.117 [2024-10-07 14:50:49.666585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.117 [2024-10-07 14:50:49.666593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:26.117 [2024-10-07 14:50:49.673343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.117 [2024-10-07 14:50:49.673366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.117 [2024-10-07 14:50:49.673375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:26.117 [2024-10-07 14:50:49.680839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500039ec00)
00:40:26.117 [2024-10-07 14:50:49.680862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.117 [2024-10-07 14:50:49.680871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.117 [2024-10-07 14:50:49.686383] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.117 [2024-10-07 14:50:49.686407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.117 [2024-10-07 14:50:49.686415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.117 [2024-10-07 14:50:49.694297] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.117 [2024-10-07 14:50:49.694321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.117 [2024-10-07 14:50:49.694330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.117 [2024-10-07 14:50:49.701880] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.117 [2024-10-07 14:50:49.701904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.117 [2024-10-07 14:50:49.701913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.117 [2024-10-07 14:50:49.710700] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.117 [2024-10-07 14:50:49.710724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.117 [2024-10-07 14:50:49.710733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.117 [2024-10-07 14:50:49.720448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.117 [2024-10-07 14:50:49.720473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.117 [2024-10-07 14:50:49.720481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.117 [2024-10-07 14:50:49.731886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.117 [2024-10-07 14:50:49.731910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.117 [2024-10-07 14:50:49.731919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.117 [2024-10-07 14:50:49.743440] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.117 [2024-10-07 14:50:49.743465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.117 [2024-10-07 14:50:49.743473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.117 [2024-10-07 14:50:49.755309] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.117 [2024-10-07 14:50:49.755333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.117 [2024-10-07 14:50:49.755342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.117 [2024-10-07 14:50:49.767495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.117 [2024-10-07 14:50:49.767519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.117 [2024-10-07 14:50:49.767528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.117 [2024-10-07 14:50:49.779720] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.117 [2024-10-07 14:50:49.779743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.117 [2024-10-07 14:50:49.779752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.117 [2024-10-07 14:50:49.792236] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.117 [2024-10-07 14:50:49.792261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.117 [2024-10-07 14:50:49.792274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.117 [2024-10-07 14:50:49.803808] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.118 [2024-10-07 14:50:49.803831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.118 [2024-10-07 14:50:49.803840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.118 [2024-10-07 14:50:49.815364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.118 [2024-10-07 14:50:49.815388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.118 [2024-10-07 14:50:49.815397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.379 [2024-10-07 14:50:49.826394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.379 [2024-10-07 14:50:49.826418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.379 [2024-10-07 14:50:49.826427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.379 [2024-10-07 14:50:49.838393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.379 [2024-10-07 14:50:49.838417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.379 [2024-10-07 14:50:49.838426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.379 [2024-10-07 14:50:49.848170] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.379 [2024-10-07 14:50:49.848194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.379 [2024-10-07 14:50:49.848203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.379 [2024-10-07 14:50:49.858300] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.379 [2024-10-07 14:50:49.858325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.379 [2024-10-07 14:50:49.858334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.379 [2024-10-07 14:50:49.865435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.379 [2024-10-07 14:50:49.865458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.379 [2024-10-07 14:50:49.865468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.379 [2024-10-07 14:50:49.872040] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.379 [2024-10-07 14:50:49.872063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.379 [2024-10-07 14:50:49.872072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.379 [2024-10-07 14:50:49.880365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.379 [2024-10-07 14:50:49.880390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.379 [2024-10-07 14:50:49.880399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.379 [2024-10-07 14:50:49.886677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.379 [2024-10-07 14:50:49.886701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.379 [2024-10-07 14:50:49.886710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.379 [2024-10-07 14:50:49.892417] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.379 [2024-10-07 14:50:49.892441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.379 [2024-10-07 14:50:49.892450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.379 [2024-10-07 14:50:49.899561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.379 [2024-10-07 14:50:49.899585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.379 [2024-10-07 14:50:49.899594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.379 [2024-10-07 14:50:49.908066] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.379 [2024-10-07 14:50:49.908090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.379 [2024-10-07 14:50:49.908100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.379 [2024-10-07 14:50:49.916534] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.379 [2024-10-07 14:50:49.916558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.379 [2024-10-07 14:50:49.916567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.379 [2024-10-07 14:50:49.922506] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.379 [2024-10-07 14:50:49.922529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.379 [2024-10-07 14:50:49.922538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.379 [2024-10-07 14:50:49.931486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.379 [2024-10-07 14:50:49.931509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.379 [2024-10-07 14:50:49.931518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.379 [2024-10-07 14:50:49.940838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.379 [2024-10-07 14:50:49.940862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.379 [2024-10-07 14:50:49.940875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.379 [2024-10-07 14:50:49.949802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.379 [2024-10-07 14:50:49.949826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.379 [2024-10-07 14:50:49.949834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.379 [2024-10-07 14:50:49.958681] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.379 [2024-10-07 14:50:49.958705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.379 [2024-10-07 14:50:49.958714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.379 [2024-10-07 14:50:49.968036] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.379 [2024-10-07 14:50:49.968059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.379 [2024-10-07 14:50:49.968074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.379 [2024-10-07 14:50:49.976973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.379 [2024-10-07 14:50:49.976997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.379 [2024-10-07 14:50:49.977011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.379 [2024-10-07 14:50:49.986409] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.379 [2024-10-07 14:50:49.986433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.379 [2024-10-07 14:50:49.986441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.379 [2024-10-07 14:50:49.991932] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.380 [2024-10-07 14:50:49.991955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.380 [2024-10-07 14:50:49.991964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.380 [2024-10-07 14:50:49.999864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.380 [2024-10-07 14:50:49.999888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.380 [2024-10-07 14:50:49.999897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.380 [2024-10-07 14:50:50.008827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.380 [2024-10-07 14:50:50.008852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.380 [2024-10-07 14:50:50.008861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.380 [2024-10-07 14:50:50.014708] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.380 [2024-10-07 14:50:50.014732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.380 [2024-10-07 14:50:50.014742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.380 [2024-10-07 14:50:50.022510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.380 [2024-10-07 14:50:50.022536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.380 [2024-10-07 14:50:50.022545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.380 [2024-10-07 14:50:50.030388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.380 [2024-10-07 14:50:50.030411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.380 [2024-10-07 14:50:50.030421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.380 [2024-10-07 14:50:50.038487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.380 [2024-10-07 14:50:50.038511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.380 [2024-10-07 14:50:50.038520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.380 [2024-10-07 14:50:50.046145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.380 [2024-10-07 14:50:50.046168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.380 [2024-10-07 14:50:50.046177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.380 [2024-10-07 14:50:50.054498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.380 [2024-10-07 14:50:50.054522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.380 [2024-10-07 14:50:50.054531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.380 [2024-10-07 14:50:50.062126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.380 [2024-10-07 14:50:50.062150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.380 [2024-10-07 14:50:50.062159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.380 [2024-10-07 14:50:50.073536] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.380 [2024-10-07 14:50:50.073560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.380 [2024-10-07 14:50:50.073569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.380 [2024-10-07 14:50:50.082840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.380 [2024-10-07 14:50:50.082865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.380 [2024-10-07 14:50:50.082878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.644 [2024-10-07 14:50:50.088726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.644 [2024-10-07 14:50:50.088750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.644 [2024-10-07 14:50:50.088759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.644 [2024-10-07 14:50:50.098616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.644 [2024-10-07 14:50:50.098641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.644 [2024-10-07 14:50:50.098650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.644 [2024-10-07 14:50:50.107532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.644 [2024-10-07 14:50:50.107558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.644 [2024-10-07 14:50:50.107567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.644 [2024-10-07 14:50:50.117213] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.644 [2024-10-07 14:50:50.117237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.644 [2024-10-07 14:50:50.117246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.644 [2024-10-07 14:50:50.127066] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.644 [2024-10-07 14:50:50.127090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.644 [2024-10-07 14:50:50.127099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.644 [2024-10-07 14:50:50.136050] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.644 [2024-10-07 14:50:50.136074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.644 [2024-10-07 14:50:50.136083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.644 [2024-10-07 14:50:50.144322] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.644 [2024-10-07 14:50:50.144346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.644 [2024-10-07 14:50:50.144355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.644 [2024-10-07 14:50:50.153159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.644 [2024-10-07 14:50:50.153184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.644 [2024-10-07 14:50:50.153193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.644 [2024-10-07 14:50:50.161378] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.644 [2024-10-07 14:50:50.161407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.644 [2024-10-07 14:50:50.161417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.644 [2024-10-07 14:50:50.169434] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.644 [2024-10-07 14:50:50.169458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.644 [2024-10-07 14:50:50.169467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.644 [2024-10-07 14:50:50.176410] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.644 [2024-10-07 14:50:50.176433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.644 [2024-10-07 14:50:50.176442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.644 [2024-10-07 14:50:50.184723] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.644 [2024-10-07 14:50:50.184745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.644 [2024-10-07 14:50:50.184754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.644 [2024-10-07 14:50:50.190248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.644 [2024-10-07 14:50:50.190272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.644 [2024-10-07 14:50:50.190281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.644 [2024-10-07 14:50:50.197248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.644 [2024-10-07 14:50:50.197271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.644 [2024-10-07 14:50:50.197280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.644 [2024-10-07 14:50:50.202778] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.644 [2024-10-07 14:50:50.202801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.644 [2024-10-07 14:50:50.202810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.644 [2024-10-07 14:50:50.212911] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.644 [2024-10-07 14:50:50.212932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.644 [2024-10-07 14:50:50.212941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.644 [2024-10-07 14:50:50.218863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.644 [2024-10-07 14:50:50.218885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.644 [2024-10-07 14:50:50.218898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.644 [2024-10-07 14:50:50.226164] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.644 [2024-10-07 14:50:50.226186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.644 [2024-10-07 14:50:50.226195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.644 [2024-10-07 14:50:50.235349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.644 [2024-10-07 14:50:50.235372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.644 [2024-10-07 14:50:50.235381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.644 [2024-10-07 14:50:50.243819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.644 [2024-10-07 14:50:50.243842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.644 [2024-10-07 14:50:50.243852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.644 [2024-10-07 14:50:50.251752] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.644 [2024-10-07 14:50:50.251774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.644 [2024-10-07 14:50:50.251783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.644 [2024-10-07 14:50:50.262151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.644 [2024-10-07 14:50:50.262174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.645 [2024-10-07 14:50:50.262183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.645 [2024-10-07 14:50:50.269652] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.645 [2024-10-07 14:50:50.269674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.645 [2024-10-07 14:50:50.269683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.645 [2024-10-07 14:50:50.276290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.645 [2024-10-07 14:50:50.276312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.645 [2024-10-07 14:50:50.276322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.645 [2024-10-07 14:50:50.286901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.645 [2024-10-07 14:50:50.286922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.645 [2024-10-07 14:50:50.286932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.645 [2024-10-07 14:50:50.292613] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.645 [2024-10-07 14:50:50.292641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.645 [2024-10-07 14:50:50.292650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.645 [2024-10-07 14:50:50.301635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.645 [2024-10-07 14:50:50.301657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.645 [2024-10-07 14:50:50.301666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.645 [2024-10-07 14:50:50.311283] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.645 [2024-10-07 14:50:50.311305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.645 [2024-10-07 14:50:50.311314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.645 3551.00 IOPS, 443.88 MiB/s [2024-10-07T12:50:50.354Z] [2024-10-07 14:50:50.320450] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.645 [2024-10-07 14:50:50.320480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.645 [2024-10-07 14:50:50.320489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.645 [2024-10-07 14:50:50.325628] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.645 [2024-10-07 14:50:50.325650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.645 [2024-10-07 14:50:50.325659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.645 [2024-10-07 14:50:50.333304] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.645 [2024-10-07 14:50:50.333327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.645 [2024-10-07 14:50:50.333336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:40:26.645 [2024-10-07 14:50:50.339540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.645 [2024-10-07 14:50:50.339562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.645 [2024-10-07 14:50:50.339571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:40:26.645 [2024-10-07 14:50:50.345754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.645 [2024-10-07 14:50:50.345780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.645 [2024-10-07 14:50:50.345789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:40:26.907 [2024-10-07 14:50:50.355845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.907 [2024-10-07 14:50:50.355867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.907 [2024-10-07 14:50:50.355881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:40:26.907 [2024-10-07 14:50:50.364871] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00)
00:40:26.907 [2024-10-07 14:50:50.364895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:40:26.908 
[2024-10-07 14:50:50.364904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:26.908 [2024-10-07 14:50:50.373420] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.908 [2024-10-07 14:50:50.373443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.908 [2024-10-07 14:50:50.373452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:26.908 [2024-10-07 14:50:50.382828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.908 [2024-10-07 14:50:50.382851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.908 [2024-10-07 14:50:50.382860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:26.908 [2024-10-07 14:50:50.392502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.908 [2024-10-07 14:50:50.392525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.908 [2024-10-07 14:50:50.392534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:26.908 [2024-10-07 14:50:50.400814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.908 [2024-10-07 14:50:50.400838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 
nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.908 [2024-10-07 14:50:50.400846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:26.908 [2024-10-07 14:50:50.412903] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.908 [2024-10-07 14:50:50.412926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.908 [2024-10-07 14:50:50.412935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:26.908 [2024-10-07 14:50:50.424637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.908 [2024-10-07 14:50:50.424660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.908 [2024-10-07 14:50:50.424669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:26.908 [2024-10-07 14:50:50.436723] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.908 [2024-10-07 14:50:50.436745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.908 [2024-10-07 14:50:50.436754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:26.908 [2024-10-07 14:50:50.446431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.908 [2024-10-07 14:50:50.446457] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.908 [2024-10-07 14:50:50.446466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:26.908 [2024-10-07 14:50:50.456270] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.908 [2024-10-07 14:50:50.456293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.908 [2024-10-07 14:50:50.456302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:26.908 [2024-10-07 14:50:50.465923] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.908 [2024-10-07 14:50:50.465946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.908 [2024-10-07 14:50:50.465955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:26.908 [2024-10-07 14:50:50.477776] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.908 [2024-10-07 14:50:50.477799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.908 [2024-10-07 14:50:50.477808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:26.908 [2024-10-07 14:50:50.489239] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500039ec00) 00:40:26.908 [2024-10-07 14:50:50.489262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.908 [2024-10-07 14:50:50.489271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:26.908 [2024-10-07 14:50:50.501166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.908 [2024-10-07 14:50:50.501190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.908 [2024-10-07 14:50:50.501199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:26.908 [2024-10-07 14:50:50.512848] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.908 [2024-10-07 14:50:50.512871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.908 [2024-10-07 14:50:50.512879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:26.908 [2024-10-07 14:50:50.525403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.908 [2024-10-07 14:50:50.525426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.908 [2024-10-07 14:50:50.525435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:26.908 [2024-10-07 14:50:50.537231] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.908 [2024-10-07 14:50:50.537254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.908 [2024-10-07 14:50:50.537263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:26.908 [2024-10-07 14:50:50.549195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.908 [2024-10-07 14:50:50.549218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.908 [2024-10-07 14:50:50.549227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:26.908 [2024-10-07 14:50:50.559702] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.908 [2024-10-07 14:50:50.559726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.908 [2024-10-07 14:50:50.559735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:26.908 [2024-10-07 14:50:50.570056] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.908 [2024-10-07 14:50:50.570079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.908 [2024-10-07 14:50:50.570088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:26.908 [2024-10-07 14:50:50.580649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.908 [2024-10-07 14:50:50.580672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.908 [2024-10-07 14:50:50.580681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:26.908 [2024-10-07 14:50:50.592369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.908 [2024-10-07 14:50:50.592392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.908 [2024-10-07 14:50:50.592400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:26.908 [2024-10-07 14:50:50.603222] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.908 [2024-10-07 14:50:50.603245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.908 [2024-10-07 14:50:50.603254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:26.908 [2024-10-07 14:50:50.614367] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:26.908 [2024-10-07 14:50:50.614390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:26.908 [2024-10-07 14:50:50.614398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:27.171 [2024-10-07 14:50:50.623547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.171 [2024-10-07 14:50:50.623570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.171 [2024-10-07 14:50:50.623579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:27.171 [2024-10-07 14:50:50.635097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.171 [2024-10-07 14:50:50.635124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.171 [2024-10-07 14:50:50.635133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:27.171 [2024-10-07 14:50:50.646726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.171 [2024-10-07 14:50:50.646749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.171 [2024-10-07 14:50:50.646757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:27.171 [2024-10-07 14:50:50.658447] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.171 [2024-10-07 14:50:50.658470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:40:27.171 [2024-10-07 14:50:50.658479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:27.171 [2024-10-07 14:50:50.667755] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.171 [2024-10-07 14:50:50.667777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.171 [2024-10-07 14:50:50.667786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:27.172 [2024-10-07 14:50:50.679687] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.172 [2024-10-07 14:50:50.679709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.172 [2024-10-07 14:50:50.679718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:27.172 [2024-10-07 14:50:50.690451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.172 [2024-10-07 14:50:50.690473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.172 [2024-10-07 14:50:50.690482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:27.172 [2024-10-07 14:50:50.701395] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.172 [2024-10-07 14:50:50.701419] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.172 [2024-10-07 14:50:50.701428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:27.172 [2024-10-07 14:50:50.709679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.172 [2024-10-07 14:50:50.709702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.172 [2024-10-07 14:50:50.709711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:27.172 [2024-10-07 14:50:50.719101] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.172 [2024-10-07 14:50:50.719124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.172 [2024-10-07 14:50:50.719134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:27.172 [2024-10-07 14:50:50.729264] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.172 [2024-10-07 14:50:50.729288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.172 [2024-10-07 14:50:50.729297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:27.172 [2024-10-07 14:50:50.738422] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500039ec00) 00:40:27.172 [2024-10-07 14:50:50.738445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.172 [2024-10-07 14:50:50.738454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:27.172 [2024-10-07 14:50:50.747620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.172 [2024-10-07 14:50:50.747642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.172 [2024-10-07 14:50:50.747651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:27.172 [2024-10-07 14:50:50.759345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.172 [2024-10-07 14:50:50.759368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.172 [2024-10-07 14:50:50.759377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:27.172 [2024-10-07 14:50:50.770879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.172 [2024-10-07 14:50:50.770903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.172 [2024-10-07 14:50:50.770911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:27.172 [2024-10-07 14:50:50.779499] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.172 [2024-10-07 14:50:50.779523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.172 [2024-10-07 14:50:50.779531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:27.172 [2024-10-07 14:50:50.788330] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.172 [2024-10-07 14:50:50.788353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.172 [2024-10-07 14:50:50.788362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:27.172 [2024-10-07 14:50:50.799275] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.172 [2024-10-07 14:50:50.799298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.172 [2024-10-07 14:50:50.799307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:27.172 [2024-10-07 14:50:50.807697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.172 [2024-10-07 14:50:50.807724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.172 [2024-10-07 14:50:50.807732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:27.172 [2024-10-07 14:50:50.815401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.172 [2024-10-07 14:50:50.815424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.172 [2024-10-07 14:50:50.815433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:27.172 [2024-10-07 14:50:50.826333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.172 [2024-10-07 14:50:50.826356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.172 [2024-10-07 14:50:50.826365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:27.172 [2024-10-07 14:50:50.835979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.172 [2024-10-07 14:50:50.836007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.172 [2024-10-07 14:50:50.836016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:27.172 [2024-10-07 14:50:50.847307] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.172 [2024-10-07 14:50:50.847330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.172 [2024-10-07 14:50:50.847339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:27.172 [2024-10-07 14:50:50.858477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.172 [2024-10-07 14:50:50.858500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.172 [2024-10-07 14:50:50.858509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:27.172 [2024-10-07 14:50:50.869827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.172 [2024-10-07 14:50:50.869851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.172 [2024-10-07 14:50:50.869859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:27.433 [2024-10-07 14:50:50.881277] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.433 [2024-10-07 14:50:50.881301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.433 [2024-10-07 14:50:50.881310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:27.433 [2024-10-07 14:50:50.890948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.433 [2024-10-07 14:50:50.890970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19232 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:40:27.433 [2024-10-07 14:50:50.890979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:27.433 [2024-10-07 14:50:50.902841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.433 [2024-10-07 14:50:50.902865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.433 [2024-10-07 14:50:50.902875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:27.433 [2024-10-07 14:50:50.913992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.433 [2024-10-07 14:50:50.914020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.433 [2024-10-07 14:50:50.914030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:27.433 [2024-10-07 14:50:50.925884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.433 [2024-10-07 14:50:50.925907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.433 [2024-10-07 14:50:50.925916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:27.433 [2024-10-07 14:50:50.936208] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.433 [2024-10-07 14:50:50.936230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.433 [2024-10-07 14:50:50.936239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:27.433 [2024-10-07 14:50:50.948625] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.433 [2024-10-07 14:50:50.948648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.433 [2024-10-07 14:50:50.948657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:27.433 [2024-10-07 14:50:50.957715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.434 [2024-10-07 14:50:50.957739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.434 [2024-10-07 14:50:50.957748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:27.434 [2024-10-07 14:50:50.968981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.434 [2024-10-07 14:50:50.969011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.434 [2024-10-07 14:50:50.969020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:27.434 [2024-10-07 14:50:50.979713] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500039ec00) 00:40:27.434 [2024-10-07 14:50:50.979736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.434 [2024-10-07 14:50:50.979746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:27.434 [2024-10-07 14:50:50.990621] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.434 [2024-10-07 14:50:50.990645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.434 [2024-10-07 14:50:50.990657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:27.434 [2024-10-07 14:50:51.002200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.434 [2024-10-07 14:50:51.002223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.434 [2024-10-07 14:50:51.002233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:27.434 [2024-10-07 14:50:51.013939] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.434 [2024-10-07 14:50:51.013962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.434 [2024-10-07 14:50:51.013971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:27.434 [2024-10-07 14:50:51.024526] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.434 [2024-10-07 14:50:51.024550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.434 [2024-10-07 14:50:51.024559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:27.434 [2024-10-07 14:50:51.035722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.434 [2024-10-07 14:50:51.035745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.434 [2024-10-07 14:50:51.035754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:27.434 [2024-10-07 14:50:51.048364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.434 [2024-10-07 14:50:51.048387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.434 [2024-10-07 14:50:51.048396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:27.434 [2024-10-07 14:50:51.060335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.434 [2024-10-07 14:50:51.060358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.434 [2024-10-07 14:50:51.060367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:27.434 [2024-10-07 14:50:51.073249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.434 [2024-10-07 14:50:51.073272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.434 [2024-10-07 14:50:51.073280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:27.434 [2024-10-07 14:50:51.084598] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.434 [2024-10-07 14:50:51.084621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.434 [2024-10-07 14:50:51.084630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:27.434 [2024-10-07 14:50:51.096320] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.434 [2024-10-07 14:50:51.096343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.434 [2024-10-07 14:50:51.096352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:27.434 [2024-10-07 14:50:51.105827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.434 [2024-10-07 14:50:51.105849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.434 [2024-10-07 14:50:51.105858] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:27.434 [2024-10-07 14:50:51.115832] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.434 [2024-10-07 14:50:51.115855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.434 [2024-10-07 14:50:51.115865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:27.434 [2024-10-07 14:50:51.127748] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.434 [2024-10-07 14:50:51.127772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.434 [2024-10-07 14:50:51.127781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:27.434 [2024-10-07 14:50:51.139975] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.434 [2024-10-07 14:50:51.139999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.434 [2024-10-07 14:50:51.140014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:27.694 [2024-10-07 14:50:51.152346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.694 [2024-10-07 14:50:51.152371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:40:27.695 [2024-10-07 14:50:51.152379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:27.695 [2024-10-07 14:50:51.161401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.695 [2024-10-07 14:50:51.161424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.695 [2024-10-07 14:50:51.161433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:27.695 [2024-10-07 14:50:51.170409] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.695 [2024-10-07 14:50:51.170431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.695 [2024-10-07 14:50:51.170440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:27.695 [2024-10-07 14:50:51.179653] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.695 [2024-10-07 14:50:51.179676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.695 [2024-10-07 14:50:51.179689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:27.695 [2024-10-07 14:50:51.191416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.695 [2024-10-07 14:50:51.191439] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.695 [2024-10-07 14:50:51.191448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:27.695 [2024-10-07 14:50:51.201645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.695 [2024-10-07 14:50:51.201669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.695 [2024-10-07 14:50:51.201678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:27.695 [2024-10-07 14:50:51.209987] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.695 [2024-10-07 14:50:51.210016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.695 [2024-10-07 14:50:51.210045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:27.695 [2024-10-07 14:50:51.221129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.695 [2024-10-07 14:50:51.221152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.695 [2024-10-07 14:50:51.221161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:27.695 [2024-10-07 14:50:51.232949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500039ec00) 00:40:27.695 [2024-10-07 14:50:51.232972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.695 [2024-10-07 14:50:51.232981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:27.695 [2024-10-07 14:50:51.244664] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.695 [2024-10-07 14:50:51.244687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.695 [2024-10-07 14:50:51.244695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:27.695 [2024-10-07 14:50:51.255104] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.695 [2024-10-07 14:50:51.255127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.695 [2024-10-07 14:50:51.255135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:27.695 [2024-10-07 14:50:51.265353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.695 [2024-10-07 14:50:51.265379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.695 [2024-10-07 14:50:51.265388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:27.695 [2024-10-07 14:50:51.275875] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.695 [2024-10-07 14:50:51.275899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.695 [2024-10-07 14:50:51.275908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:27.695 [2024-10-07 14:50:51.285432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.695 [2024-10-07 14:50:51.285455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.695 [2024-10-07 14:50:51.285464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:27.695 [2024-10-07 14:50:51.296810] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.695 [2024-10-07 14:50:51.296834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.695 [2024-10-07 14:50:51.296843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:27.695 [2024-10-07 14:50:51.308213] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.695 [2024-10-07 14:50:51.308237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.695 [2024-10-07 14:50:51.308248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:27.695 [2024-10-07 14:50:51.319817] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500039ec00) 00:40:27.695 [2024-10-07 14:50:51.319840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:27.695 [2024-10-07 14:50:51.319849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:27.695 3260.50 IOPS, 407.56 MiB/s 00:40:27.695 Latency(us) 00:40:27.695 [2024-10-07T12:50:51.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:27.695 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:40:27.695 nvme0n1 : 2.01 3259.11 407.39 0.00 0.00 4905.86 942.08 12997.97 00:40:27.695 [2024-10-07T12:50:51.404Z] =================================================================================================================== 00:40:27.695 [2024-10-07T12:50:51.404Z] Total : 3259.11 407.39 0.00 0.00 4905.86 942.08 12997.97 00:40:27.695 { 00:40:27.695 "results": [ 00:40:27.695 { 00:40:27.695 "job": "nvme0n1", 00:40:27.695 "core_mask": "0x2", 00:40:27.695 "workload": "randread", 00:40:27.695 "status": "finished", 00:40:27.695 "queue_depth": 16, 00:40:27.695 "io_size": 131072, 00:40:27.695 "runtime": 2.005765, 00:40:27.695 "iops": 3259.1056280272114, 00:40:27.695 "mibps": 407.3882035034014, 00:40:27.695 "io_failed": 0, 00:40:27.695 "io_timeout": 0, 00:40:27.695 "avg_latency_us": 4905.864374075774, 00:40:27.695 "min_latency_us": 942.08, 00:40:27.695 "max_latency_us": 12997.973333333333 00:40:27.695 } 00:40:27.695 ], 00:40:27.695 "core_count": 1 00:40:27.695 } 00:40:27.695 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:40:27.695 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:40:27.695 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:40:27.695 | .driver_specific 00:40:27.695 | .nvme_error 00:40:27.695 | .status_code 00:40:27.695 | .command_transient_transport_error' 00:40:27.695 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:40:27.955 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 210 > 0 )) 00:40:27.955 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3292128 00:40:27.955 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3292128 ']' 00:40:27.955 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3292128 00:40:27.955 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:40:27.955 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:27.955 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3292128 00:40:27.955 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:27.955 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:27.955 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3292128' 00:40:27.955 killing process with pid 3292128 00:40:27.955 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3292128 00:40:27.955 Received 
shutdown signal, test time was about 2.000000 seconds 00:40:27.955 00:40:27.955 Latency(us) 00:40:27.955 [2024-10-07T12:50:51.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:27.955 [2024-10-07T12:50:51.664Z] =================================================================================================================== 00:40:27.955 [2024-10-07T12:50:51.664Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:27.955 14:50:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3292128 00:40:28.525 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:40:28.525 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:40:28.525 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:40:28.525 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:40:28.525 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:40:28.525 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3293056 00:40:28.525 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3293056 /var/tmp/bperf.sock 00:40:28.525 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3293056 ']' 00:40:28.526 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:40:28.526 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:28.526 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:40:28.526 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:28.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:28.526 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:28.526 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:28.526 [2024-10-07 14:50:52.210093] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:40:28.526 [2024-10-07 14:50:52.210206] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293056 ] 00:40:28.784 [2024-10-07 14:50:52.334260] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:28.784 [2024-10-07 14:50:52.470898] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:29.353 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:29.353 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:40:29.353 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:40:29.353 14:50:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:40:29.613 14:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 
00:40:29.613 14:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:29.613 14:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:29.613 14:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:29.613 14:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:29.613 14:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:29.873 nvme0n1 00:40:29.873 14:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:40:29.873 14:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:29.873 14:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:29.873 14:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:29.873 14:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:40:29.873 14:50:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:29.873 Running I/O for 2 seconds... 
00:40:29.874 [2024-10-07 14:50:53.561213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:29.874 [2024-10-07 14:50:53.561446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.874 [2024-10-07 14:50:53.561481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:29.874 [2024-10-07 14:50:53.575425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:29.874 [2024-10-07 14:50:53.575642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:29.874 [2024-10-07 14:50:53.575668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:30.134 [2024-10-07 14:50:53.589550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:30.134 [2024-10-07 14:50:53.589760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.134 [2024-10-07 14:50:53.589781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:30.134 [2024-10-07 14:50:53.603659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:30.134 [2024-10-07 14:50:53.603868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.134 [2024-10-07 14:50:53.603889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:30.134 [2024-10-07 14:50:53.617787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:30.134 [2024-10-07 14:50:53.617994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.134 [2024-10-07 14:50:53.618020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:30.134 [2024-10-07 14:50:53.631896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:30.134 [2024-10-07 14:50:53.632110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.134 [2024-10-07 14:50:53.632131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:30.134 [2024-10-07 14:50:53.646025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:30.135 [2024-10-07 14:50:53.646230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.135 [2024-10-07 14:50:53.646250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:30.135 [2024-10-07 14:50:53.660128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:30.135 [2024-10-07 14:50:53.660333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.135 [2024-10-07 14:50:53.660353] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:30.135 [2024-10-07 14:50:53.674238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:30.135 [2024-10-07 14:50:53.674443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.135 [2024-10-07 14:50:53.674463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:30.135 [2024-10-07 14:50:53.688335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:30.135 [2024-10-07 14:50:53.688541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.135 [2024-10-07 14:50:53.688561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:30.135 [2024-10-07 14:50:53.702423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:30.135 [2024-10-07 14:50:53.702628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:30.135 [2024-10-07 14:50:53.702649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:30.135 [2024-10-07 14:50:53.716541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:30.135 [2024-10-07 14:50:53.716744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:40:30.917 17975.00 IOPS, 70.21 MiB/s [2024-10-07T12:50:54.626Z]
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.177 [2024-10-07 14:50:54.815590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.178 [2024-10-07 14:50:54.815610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.178 [2024-10-07 14:50:54.829466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.178 [2024-10-07 14:50:54.829671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.178 [2024-10-07 14:50:54.829691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.178 [2024-10-07 14:50:54.843562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.178 [2024-10-07 14:50:54.843766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.178 [2024-10-07 14:50:54.843786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.178 [2024-10-07 14:50:54.857641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.178 [2024-10-07 14:50:54.857845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.178 [2024-10-07 14:50:54.857866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:007f p:0 m:0 dnr:0 00:40:31.178 [2024-10-07 14:50:54.871737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.178 [2024-10-07 14:50:54.871941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.178 [2024-10-07 14:50:54.871961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.178 [2024-10-07 14:50:54.885828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.438 [2024-10-07 14:50:54.886041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.438 [2024-10-07 14:50:54.886062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.438 [2024-10-07 14:50:54.899900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.438 [2024-10-07 14:50:54.900113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.438 [2024-10-07 14:50:54.900134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.438 [2024-10-07 14:50:54.914011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.438 [2024-10-07 14:50:54.914216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.438 [2024-10-07 14:50:54.914237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.438 [2024-10-07 14:50:54.928083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.438 [2024-10-07 14:50:54.928287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.438 [2024-10-07 14:50:54.928310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.438 [2024-10-07 14:50:54.942175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.438 [2024-10-07 14:50:54.942379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.438 [2024-10-07 14:50:54.942399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.438 [2024-10-07 14:50:54.956238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.438 [2024-10-07 14:50:54.956441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.438 [2024-10-07 14:50:54.956461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.438 [2024-10-07 14:50:54.970363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.438 [2024-10-07 14:50:54.970566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.438 [2024-10-07 
14:50:54.970587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.438 [2024-10-07 14:50:54.984526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.438 [2024-10-07 14:50:54.984729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.438 [2024-10-07 14:50:54.984750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.438 [2024-10-07 14:50:54.998615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.438 [2024-10-07 14:50:54.998818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.438 [2024-10-07 14:50:54.998838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.438 [2024-10-07 14:50:55.012724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.438 [2024-10-07 14:50:55.012927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.438 [2024-10-07 14:50:55.012947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.438 [2024-10-07 14:50:55.026832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.438 [2024-10-07 14:50:55.027043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6311 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.438 [2024-10-07 14:50:55.027064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.438 [2024-10-07 14:50:55.040955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.438 [2024-10-07 14:50:55.041166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.438 [2024-10-07 14:50:55.041186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.439 [2024-10-07 14:50:55.055061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.439 [2024-10-07 14:50:55.055271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.439 [2024-10-07 14:50:55.055291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.439 [2024-10-07 14:50:55.069173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.439 [2024-10-07 14:50:55.069378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.439 [2024-10-07 14:50:55.069406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.439 [2024-10-07 14:50:55.083250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.439 [2024-10-07 14:50:55.083454] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.439 [2024-10-07 14:50:55.083474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.439 [2024-10-07 14:50:55.097343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.439 [2024-10-07 14:50:55.097546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.439 [2024-10-07 14:50:55.097566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.439 [2024-10-07 14:50:55.111466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.439 [2024-10-07 14:50:55.111671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.439 [2024-10-07 14:50:55.111691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.439 [2024-10-07 14:50:55.125572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.439 [2024-10-07 14:50:55.125777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.439 [2024-10-07 14:50:55.125797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.439 [2024-10-07 14:50:55.139647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200019dfda78 00:40:31.439 [2024-10-07 14:50:55.139851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.439 [2024-10-07 14:50:55.139872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.699 [2024-10-07 14:50:55.153735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.699 [2024-10-07 14:50:55.153938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.699 [2024-10-07 14:50:55.153959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.699 [2024-10-07 14:50:55.167837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.699 [2024-10-07 14:50:55.168049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.699 [2024-10-07 14:50:55.168069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.699 [2024-10-07 14:50:55.181891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.699 [2024-10-07 14:50:55.182101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.699 [2024-10-07 14:50:55.182122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.699 [2024-10-07 14:50:55.195986] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.699 [2024-10-07 14:50:55.196196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.699 [2024-10-07 14:50:55.196216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.699 [2024-10-07 14:50:55.210031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.699 [2024-10-07 14:50:55.210237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.699 [2024-10-07 14:50:55.210257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.699 [2024-10-07 14:50:55.224145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.699 [2024-10-07 14:50:55.224347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.699 [2024-10-07 14:50:55.224367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.700 [2024-10-07 14:50:55.238226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.700 [2024-10-07 14:50:55.238431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.700 [2024-10-07 14:50:55.238451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:007f p:0 m:0 dnr:0 00:40:31.700 [2024-10-07 14:50:55.252318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.700 [2024-10-07 14:50:55.252520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.700 [2024-10-07 14:50:55.252541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.700 [2024-10-07 14:50:55.266381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.700 [2024-10-07 14:50:55.266586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.700 [2024-10-07 14:50:55.266606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.700 [2024-10-07 14:50:55.280448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.700 [2024-10-07 14:50:55.280652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.700 [2024-10-07 14:50:55.280672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.700 [2024-10-07 14:50:55.294521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.700 [2024-10-07 14:50:55.294723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.700 [2024-10-07 14:50:55.294747] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.700 [2024-10-07 14:50:55.308612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.700 [2024-10-07 14:50:55.308815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.700 [2024-10-07 14:50:55.308835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.700 [2024-10-07 14:50:55.322700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.700 [2024-10-07 14:50:55.322904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.700 [2024-10-07 14:50:55.322925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.700 [2024-10-07 14:50:55.336774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.700 [2024-10-07 14:50:55.336977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.700 [2024-10-07 14:50:55.336997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.700 [2024-10-07 14:50:55.350852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.700 [2024-10-07 14:50:55.351064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.700 [2024-10-07 
14:50:55.351084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.700 [2024-10-07 14:50:55.364925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.700 [2024-10-07 14:50:55.365136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.700 [2024-10-07 14:50:55.365157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.700 [2024-10-07 14:50:55.379024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.700 [2024-10-07 14:50:55.379228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.700 [2024-10-07 14:50:55.379248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.700 [2024-10-07 14:50:55.393124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.700 [2024-10-07 14:50:55.393332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.700 [2024-10-07 14:50:55.393351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.700 [2024-10-07 14:50:55.407196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.700 [2024-10-07 14:50:55.407400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14507 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.700 [2024-10-07 14:50:55.407420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.961 [2024-10-07 14:50:55.421254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.961 [2024-10-07 14:50:55.421462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.961 [2024-10-07 14:50:55.421482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.961 [2024-10-07 14:50:55.435340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.961 [2024-10-07 14:50:55.435545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.961 [2024-10-07 14:50:55.435565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.961 [2024-10-07 14:50:55.449546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.961 [2024-10-07 14:50:55.449751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.961 [2024-10-07 14:50:55.449771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.961 [2024-10-07 14:50:55.463646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.961 [2024-10-07 14:50:55.463849] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.961 [2024-10-07 14:50:55.463869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.961 [2024-10-07 14:50:55.477735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.961 [2024-10-07 14:50:55.477938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.961 [2024-10-07 14:50:55.477958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.961 [2024-10-07 14:50:55.491809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.961 [2024-10-07 14:50:55.492017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.961 [2024-10-07 14:50:55.492038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.961 [2024-10-07 14:50:55.505875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.961 [2024-10-07 14:50:55.506088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.961 [2024-10-07 14:50:55.506108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.961 [2024-10-07 14:50:55.519957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x200019dfda78 00:40:31.961 [2024-10-07 14:50:55.520168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.961 [2024-10-07 14:50:55.520188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.961 [2024-10-07 14:50:55.534066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.961 [2024-10-07 14:50:55.534271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.961 [2024-10-07 14:50:55.534295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.961 18056.00 IOPS, 70.53 MiB/s [2024-10-07T12:50:55.670Z] [2024-10-07 14:50:55.548116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:40:31.961 [2024-10-07 14:50:55.548322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:31.961 [2024-10-07 14:50:55.548342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:40:31.961 00:40:31.961 Latency(us) 00:40:31.961 [2024-10-07T12:50:55.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:31.961 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:31.961 nvme0n1 : 2.01 18058.88 70.54 0.00 0.00 7072.93 6034.77 16056.32 00:40:31.961 [2024-10-07T12:50:55.670Z] =================================================================================================================== 00:40:31.961 [2024-10-07T12:50:55.670Z] Total : 18058.88 
70.54 0.00 0.00 7072.93 6034.77 16056.32 00:40:31.961 { 00:40:31.961 "results": [ 00:40:31.961 { 00:40:31.961 "job": "nvme0n1", 00:40:31.961 "core_mask": "0x2", 00:40:31.961 "workload": "randwrite", 00:40:31.961 "status": "finished", 00:40:31.961 "queue_depth": 128, 00:40:31.961 "io_size": 4096, 00:40:31.961 "runtime": 2.006769, 00:40:31.961 "iops": 18058.879721582303, 00:40:31.961 "mibps": 70.54249891243087, 00:40:31.961 "io_failed": 0, 00:40:31.961 "io_timeout": 0, 00:40:31.961 "avg_latency_us": 7072.926610743194, 00:40:31.961 "min_latency_us": 6034.7733333333335, 00:40:31.961 "max_latency_us": 16056.32 00:40:31.961 } 00:40:31.961 ], 00:40:31.961 "core_count": 1 00:40:31.961 } 00:40:31.961 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:40:31.961 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:40:31.961 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:40:31.961 | .driver_specific 00:40:31.961 | .nvme_error 00:40:31.961 | .status_code 00:40:31.961 | .command_transient_transport_error' 00:40:31.961 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:40:32.222 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 142 > 0 )) 00:40:32.222 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3293056 00:40:32.222 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3293056 ']' 00:40:32.222 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3293056 00:40:32.222 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 
-- # uname 00:40:32.222 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:32.222 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3293056 00:40:32.222 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:32.222 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:32.222 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3293056' 00:40:32.222 killing process with pid 3293056 00:40:32.222 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3293056 00:40:32.222 Received shutdown signal, test time was about 2.000000 seconds 00:40:32.222 00:40:32.222 Latency(us) 00:40:32.222 [2024-10-07T12:50:55.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:32.222 [2024-10-07T12:50:55.931Z] =================================================================================================================== 00:40:32.222 [2024-10-07T12:50:55.931Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:32.222 14:50:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3293056 00:40:32.792 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:40:32.792 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:40:32.792 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:40:32.792 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:40:32.792 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@56 -- # qd=16 00:40:32.792 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3293749 00:40:32.792 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3293749 /var/tmp/bperf.sock 00:40:32.792 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3293749 ']' 00:40:32.792 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:40:32.792 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:32.792 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:32.792 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:32.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:32.792 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:32.792 14:50:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:32.792 [2024-10-07 14:50:56.415852] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:40:32.792 [2024-10-07 14:50:56.415960] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293749 ] 00:40:32.792 I/O size of 131072 is greater than zero copy threshold (65536). 
00:40:32.792 Zero copy mechanism will not be used. 00:40:33.052 [2024-10-07 14:50:56.542415] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:33.052 [2024-10-07 14:50:56.679724] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:33.622 14:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:33.622 14:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:40:33.622 14:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:40:33.622 14:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:40:33.882 14:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:40:33.882 14:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:33.882 14:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:33.882 14:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:33.882 14:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:33.882 14:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:40:34.143 nvme0n1 00:40:34.143 14:50:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:40:34.143 14:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:34.143 14:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:34.143 14:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:34.143 14:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:40:34.143 14:50:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:34.143 I/O size of 131072 is greater than zero copy threshold (65536). 00:40:34.143 Zero copy mechanism will not be used. 00:40:34.143 Running I/O for 2 seconds... 
00:40:34.143 [2024-10-07 14:50:57.757304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.143 [2024-10-07 14:50:57.757678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.143 [2024-10-07 14:50:57.757714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.143 [2024-10-07 14:50:57.766462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.143 [2024-10-07 14:50:57.766829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.143 [2024-10-07 14:50:57.766857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.143 [2024-10-07 14:50:57.775286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.143 [2024-10-07 14:50:57.775519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.143 [2024-10-07 14:50:57.775541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.143 [2024-10-07 14:50:57.784810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.143 [2024-10-07 14:50:57.785166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.143 [2024-10-07 14:50:57.785189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.143 [2024-10-07 14:50:57.790715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.143 [2024-10-07 14:50:57.790947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.143 [2024-10-07 14:50:57.790968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.143 [2024-10-07 14:50:57.797576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.143 [2024-10-07 14:50:57.797918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.143 [2024-10-07 14:50:57.797940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.143 [2024-10-07 14:50:57.804974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.143 [2024-10-07 14:50:57.805356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.143 [2024-10-07 14:50:57.805378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.143 [2024-10-07 14:50:57.814044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.143 [2024-10-07 14:50:57.814382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.143 [2024-10-07 
14:50:57.814404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.143 [2024-10-07 14:50:57.822064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.143 [2024-10-07 14:50:57.822145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.144 [2024-10-07 14:50:57.822165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.144 [2024-10-07 14:50:57.831108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.144 [2024-10-07 14:50:57.831447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.144 [2024-10-07 14:50:57.831468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.144 [2024-10-07 14:50:57.839152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.144 [2024-10-07 14:50:57.839488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.144 [2024-10-07 14:50:57.839511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.144 [2024-10-07 14:50:57.844064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.144 [2024-10-07 14:50:57.844293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.144 [2024-10-07 14:50:57.844315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.144 [2024-10-07 14:50:57.848522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.144 [2024-10-07 14:50:57.848750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.144 [2024-10-07 14:50:57.848771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.405 [2024-10-07 14:50:57.855183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.405 [2024-10-07 14:50:57.855410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.405 [2024-10-07 14:50:57.855432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.405 [2024-10-07 14:50:57.860521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.405 [2024-10-07 14:50:57.860867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.405 [2024-10-07 14:50:57.860893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.405 [2024-10-07 14:50:57.865398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.405 [2024-10-07 14:50:57.865625] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.405 [2024-10-07 14:50:57.865647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.405 [2024-10-07 14:50:57.872645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.405 [2024-10-07 14:50:57.872979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.405 [2024-10-07 14:50:57.873005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.405 [2024-10-07 14:50:57.878310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.405 [2024-10-07 14:50:57.878538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.405 [2024-10-07 14:50:57.878558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.405 [2024-10-07 14:50:57.882889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.405 [2024-10-07 14:50:57.883122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.405 [2024-10-07 14:50:57.883143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.405 [2024-10-07 14:50:57.891339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.405 [2024-10-07 14:50:57.891693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.405 [2024-10-07 14:50:57.891715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.405 [2024-10-07 14:50:57.901166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.405 [2024-10-07 14:50:57.901516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.405 [2024-10-07 14:50:57.901538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.405 [2024-10-07 14:50:57.913718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.405 [2024-10-07 14:50:57.914096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.405 [2024-10-07 14:50:57.914118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.405 [2024-10-07 14:50:57.922896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.405 [2024-10-07 14:50:57.923129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.405 [2024-10-07 14:50:57.923150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.405 [2024-10-07 
14:50:57.933561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.405 [2024-10-07 14:50:57.933899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.405 [2024-10-07 14:50:57.933921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.405 [2024-10-07 14:50:57.943603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.405 [2024-10-07 14:50:57.943830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.405 [2024-10-07 14:50:57.943850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.405 [2024-10-07 14:50:57.950406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.405 [2024-10-07 14:50:57.950635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.405 [2024-10-07 14:50:57.950656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.405 [2024-10-07 14:50:57.960028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.405 [2024-10-07 14:50:57.960452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.405 [2024-10-07 14:50:57.960474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.405 [2024-10-07 14:50:57.969096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.405 [2024-10-07 14:50:57.969467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.405 [2024-10-07 14:50:57.969489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.405 [2024-10-07 14:50:57.977929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.405 [2024-10-07 14:50:57.978278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.405 [2024-10-07 14:50:57.978300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.405 [2024-10-07 14:50:57.986315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.405 [2024-10-07 14:50:57.986643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.405 [2024-10-07 14:50:57.986665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.405 [2024-10-07 14:50:57.996890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.405 [2024-10-07 14:50:57.997235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.405 [2024-10-07 14:50:57.997257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.405 [2024-10-07 14:50:58.007517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.405 [2024-10-07 14:50:58.007865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.405 [2024-10-07 14:50:58.007891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.405 [2024-10-07 14:50:58.017768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.405 [2024-10-07 14:50:58.018132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.405 [2024-10-07 14:50:58.018154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.405 [2024-10-07 14:50:58.026235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.405 [2024-10-07 14:50:58.026453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.406 [2024-10-07 14:50:58.026474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.406 [2024-10-07 14:50:58.033588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.406 [2024-10-07 14:50:58.033942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:40:34.406 [2024-10-07 14:50:58.033964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.406 [2024-10-07 14:50:58.040129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.406 [2024-10-07 14:50:58.040358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.406 [2024-10-07 14:50:58.040378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.406 [2024-10-07 14:50:58.047768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.406 [2024-10-07 14:50:58.047977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.406 [2024-10-07 14:50:58.047998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.406 [2024-10-07 14:50:58.055621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.406 [2024-10-07 14:50:58.056087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.406 [2024-10-07 14:50:58.056109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.406 [2024-10-07 14:50:58.064463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.406 [2024-10-07 14:50:58.064710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.406 [2024-10-07 14:50:58.064731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.406 [2024-10-07 14:50:58.073275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.406 [2024-10-07 14:50:58.073481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.406 [2024-10-07 14:50:58.073502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.406 [2024-10-07 14:50:58.079879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.406 [2024-10-07 14:50:58.080085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.406 [2024-10-07 14:50:58.080106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.406 [2024-10-07 14:50:58.086109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.406 [2024-10-07 14:50:58.086317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.406 [2024-10-07 14:50:58.086338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.406 [2024-10-07 14:50:58.090126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200019dfef90 00:40:34.406 [2024-10-07 14:50:58.090333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.406 [2024-10-07 14:50:58.090354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.406 [2024-10-07 14:50:58.096549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.406 [2024-10-07 14:50:58.096759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.406 [2024-10-07 14:50:58.096788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.406 [2024-10-07 14:50:58.103582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.406 [2024-10-07 14:50:58.103913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.406 [2024-10-07 14:50:58.103935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.406 [2024-10-07 14:50:58.109074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.406 [2024-10-07 14:50:58.109304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.406 [2024-10-07 14:50:58.109325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.666 [2024-10-07 14:50:58.113391] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.113589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.113610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.119024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.119218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.119239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.123185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.123381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.123402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.127288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.127485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.127506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.131219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.131413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.131433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.136940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.137147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.137168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.141414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.141608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.141630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.147808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.148008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.148029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.151890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.152087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.152108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.155809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.156007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.156028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.159731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.159925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.159945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.163899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.164241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.164263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.171449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.171666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.171687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.180261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.180456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.180477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.190477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.190693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.190714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.199426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.199625] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.199646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.206749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.206960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.206981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.216692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.217012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.217034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.226960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.227196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.227217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.237990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.238234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.238255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.247715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.248012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.248034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.257975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.258201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.258223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.269234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.269565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.269586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.279687] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.280146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.280168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.290306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.290669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.290691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.299615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.299849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.299870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.308464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.308667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.667 [2024-10-07 14:50:58.308688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.667 [2024-10-07 14:50:58.318524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.667 [2024-10-07 14:50:58.318768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.668 [2024-10-07 14:50:58.318789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.668 [2024-10-07 14:50:58.328373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.668 [2024-10-07 14:50:58.328665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.668 [2024-10-07 14:50:58.328691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.668 [2024-10-07 14:50:58.337521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.668 [2024-10-07 14:50:58.337775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.668 [2024-10-07 14:50:58.337797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.668 [2024-10-07 14:50:58.348325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.668 [2024-10-07 14:50:58.348644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.668 [2024-10-07 14:50:58.348667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.668 [2024-10-07 14:50:58.358764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.668 [2024-10-07 14:50:58.359022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.668 [2024-10-07 14:50:58.359043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.668 [2024-10-07 14:50:58.368430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.668 [2024-10-07 14:50:58.368724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.668 [2024-10-07 14:50:58.368745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.928 [2024-10-07 14:50:58.378452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.928 [2024-10-07 14:50:58.378711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.928 [2024-10-07 14:50:58.378731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.928 [2024-10-07 14:50:58.388558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.928 [2024-10-07 14:50:58.388837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:40:34.928 [2024-10-07 14:50:58.388858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.928 [2024-10-07 14:50:58.398731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.928 [2024-10-07 14:50:58.399089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.928 [2024-10-07 14:50:58.399111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.928 [2024-10-07 14:50:58.408862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.928 [2024-10-07 14:50:58.409128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.928 [2024-10-07 14:50:58.409151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.928 [2024-10-07 14:50:58.418477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.928 [2024-10-07 14:50:58.418784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.928 [2024-10-07 14:50:58.418806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.928 [2024-10-07 14:50:58.428530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.928 [2024-10-07 14:50:58.428815] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.928 [2024-10-07 14:50:58.428837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.928 [2024-10-07 14:50:58.438473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.928 [2024-10-07 14:50:58.438706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.928 [2024-10-07 14:50:58.438727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.928 [2024-10-07 14:50:58.447954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.928 [2024-10-07 14:50:58.448446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.928 [2024-10-07 14:50:58.448468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.928 [2024-10-07 14:50:58.458857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.928 [2024-10-07 14:50:58.459182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.928 [2024-10-07 14:50:58.459204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.928 [2024-10-07 14:50:58.468666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.928 
[2024-10-07 14:50:58.468964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.928 [2024-10-07 14:50:58.468986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.928 [2024-10-07 14:50:58.479351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.928 [2024-10-07 14:50:58.479744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.928 [2024-10-07 14:50:58.479766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.928 [2024-10-07 14:50:58.489917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.928 [2024-10-07 14:50:58.490213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.928 [2024-10-07 14:50:58.490235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.928 [2024-10-07 14:50:58.499625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.928 [2024-10-07 14:50:58.499895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.928 [2024-10-07 14:50:58.499920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.928 [2024-10-07 14:50:58.511439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.928 [2024-10-07 14:50:58.511812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.928 [2024-10-07 14:50:58.511834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.928 [2024-10-07 14:50:58.520352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.928 [2024-10-07 14:50:58.520709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.928 [2024-10-07 14:50:58.520731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.928 [2024-10-07 14:50:58.528552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.928 [2024-10-07 14:50:58.528881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.928 [2024-10-07 14:50:58.528903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.928 [2024-10-07 14:50:58.534562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.928 [2024-10-07 14:50:58.534897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.928 [2024-10-07 14:50:58.534919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.928 
[2024-10-07 14:50:58.543459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.928 [2024-10-07 14:50:58.543740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.928 [2024-10-07 14:50:58.543761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.928 [2024-10-07 14:50:58.553893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.928 [2024-10-07 14:50:58.554177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.928 [2024-10-07 14:50:58.554199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.928 [2024-10-07 14:50:58.564113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.928 [2024-10-07 14:50:58.564345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.928 [2024-10-07 14:50:58.564365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.929 [2024-10-07 14:50:58.575112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.929 [2024-10-07 14:50:58.575379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.929 [2024-10-07 14:50:58.575401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.929 [2024-10-07 14:50:58.585692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.929 [2024-10-07 14:50:58.586142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.929 [2024-10-07 14:50:58.586164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:34.929 [2024-10-07 14:50:58.596588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.929 [2024-10-07 14:50:58.596798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.929 [2024-10-07 14:50:58.596819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:34.929 [2024-10-07 14:50:58.606831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.929 [2024-10-07 14:50:58.607164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.929 [2024-10-07 14:50:58.607185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:34.929 [2024-10-07 14:50:58.618149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.929 [2024-10-07 14:50:58.618367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.929 [2024-10-07 14:50:58.618388] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:34.929 [2024-10-07 14:50:58.628768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:34.929 [2024-10-07 14:50:58.629009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:34.929 [2024-10-07 14:50:58.629030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.189 [2024-10-07 14:50:58.639550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.189 [2024-10-07 14:50:58.639955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.189 [2024-10-07 14:50:58.639978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.189 [2024-10-07 14:50:58.649810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.189 [2024-10-07 14:50:58.650082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.189 [2024-10-07 14:50:58.650103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.189 [2024-10-07 14:50:58.660608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.189 [2024-10-07 14:50:58.660830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.189 [2024-10-07 14:50:58.660859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.189 [2024-10-07 14:50:58.671485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.189 [2024-10-07 14:50:58.671808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.189 [2024-10-07 14:50:58.671834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.189 [2024-10-07 14:50:58.681975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.189 [2024-10-07 14:50:58.682270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.189 [2024-10-07 14:50:58.682292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.189 [2024-10-07 14:50:58.693451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.189 [2024-10-07 14:50:58.693682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.189 [2024-10-07 14:50:58.693703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.189 [2024-10-07 14:50:58.704063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.189 [2024-10-07 14:50:58.704425] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.189 [2024-10-07 14:50:58.704448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.189 [2024-10-07 14:50:58.714675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.189 [2024-10-07 14:50:58.714987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.189 [2024-10-07 14:50:58.715014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.189 [2024-10-07 14:50:58.724642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.189 [2024-10-07 14:50:58.724845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.189 [2024-10-07 14:50:58.724866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.189 [2024-10-07 14:50:58.736161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.189 [2024-10-07 14:50:58.736401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.189 [2024-10-07 14:50:58.736422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.190 3581.00 IOPS, 447.62 MiB/s [2024-10-07T12:50:58.899Z] [2024-10-07 14:50:58.746833] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.190 [2024-10-07 14:50:58.747042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.190 [2024-10-07 14:50:58.747063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.190 [2024-10-07 14:50:58.753896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.190 [2024-10-07 14:50:58.754138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.190 [2024-10-07 14:50:58.754159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.190 [2024-10-07 14:50:58.763091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.190 [2024-10-07 14:50:58.763338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.190 [2024-10-07 14:50:58.763359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.190 [2024-10-07 14:50:58.771857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.190 [2024-10-07 14:50:58.772111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.190 [2024-10-07 14:50:58.772133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:40:35.190 [2024-10-07 14:50:58.779268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.190 [2024-10-07 14:50:58.779464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.190 [2024-10-07 14:50:58.779485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.190 [2024-10-07 14:50:58.787244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.190 [2024-10-07 14:50:58.787496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.190 [2024-10-07 14:50:58.787518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.190 [2024-10-07 14:50:58.795930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.190 [2024-10-07 14:50:58.796254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.190 [2024-10-07 14:50:58.796276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.190 [2024-10-07 14:50:58.804896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.190 [2024-10-07 14:50:58.805111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.190 [2024-10-07 14:50:58.805132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.190 [2024-10-07 14:50:58.813719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.190 [2024-10-07 14:50:58.813910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.190 [2024-10-07 14:50:58.813931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.190 [2024-10-07 14:50:58.822775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.190 [2024-10-07 14:50:58.823110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.190 [2024-10-07 14:50:58.823131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.190 [2024-10-07 14:50:58.831481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.190 [2024-10-07 14:50:58.831870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.190 [2024-10-07 14:50:58.831896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.190 [2024-10-07 14:50:58.840292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.190 [2024-10-07 14:50:58.840586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.190 [2024-10-07 
14:50:58.840608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.190 [2024-10-07 14:50:58.846952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.190 [2024-10-07 14:50:58.847159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.190 [2024-10-07 14:50:58.847180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.190 [2024-10-07 14:50:58.855454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.190 [2024-10-07 14:50:58.855665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.190 [2024-10-07 14:50:58.855687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.190 [2024-10-07 14:50:58.863925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.190 [2024-10-07 14:50:58.864103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.190 [2024-10-07 14:50:58.864123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.190 [2024-10-07 14:50:58.873086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.190 [2024-10-07 14:50:58.873326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.190 [2024-10-07 14:50:58.873346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.190 [2024-10-07 14:50:58.880284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.190 [2024-10-07 14:50:58.880490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.190 [2024-10-07 14:50:58.880511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.190 [2024-10-07 14:50:58.888814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.190 [2024-10-07 14:50:58.889136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.190 [2024-10-07 14:50:58.889157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.450 [2024-10-07 14:50:58.899283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.450 [2024-10-07 14:50:58.899468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.450 [2024-10-07 14:50:58.899489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.450 [2024-10-07 14:50:58.906086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:58.906284] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:58.906304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 14:50:58.913317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:58.913515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:58.913536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 14:50:58.920112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:58.920388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:58.920409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 14:50:58.928250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:58.928541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:58.928563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 14:50:58.935508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:58.935696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:58.935716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 14:50:58.941937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:58.942130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:58.942151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 14:50:58.950908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:58.951203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:58.951224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 14:50:58.960600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:58.960907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:58.960928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 
14:50:58.970302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:58.970569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:58.970594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 14:50:58.980990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:58.981261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:58.981281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 14:50:58.991886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:58.992223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:58.992245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 14:50:59.001943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:59.002216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:59.002237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 14:50:59.012379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:59.012831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:59.012853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 14:50:59.022089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:59.022477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:59.022499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 14:50:59.033119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:59.033370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:59.033391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 14:50:59.043867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:59.044206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:59.044228] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 14:50:59.054887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:59.055219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:59.055241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 14:50:59.066027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:59.066236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:59.066257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 14:50:59.076486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:59.076840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:59.076861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 14:50:59.087570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:59.087816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:59.087837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 14:50:59.098184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:59.098472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:59.098493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 14:50:59.108755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:59.109020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:59.109041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 14:50:59.119578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:59.119872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:59.119893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 14:50:59.130184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:59.130483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:59.130505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 14:50:59.141148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:59.141390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:59.141411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.451 [2024-10-07 14:50:59.151420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.451 [2024-10-07 14:50:59.151657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.451 [2024-10-07 14:50:59.151678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.711 [2024-10-07 14:50:59.161369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.711 [2024-10-07 14:50:59.161672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.711 [2024-10-07 14:50:59.161694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.711 [2024-10-07 14:50:59.169905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200019dfef90 00:40:35.711 [2024-10-07 14:50:59.170274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.711 [2024-10-07 14:50:59.170296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.711 [2024-10-07 14:50:59.177118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.711 [2024-10-07 14:50:59.177303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.711 [2024-10-07 14:50:59.177323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.711 [2024-10-07 14:50:59.183945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.711 [2024-10-07 14:50:59.184324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.711 [2024-10-07 14:50:59.184345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.711 [2024-10-07 14:50:59.189555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.711 [2024-10-07 14:50:59.189643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.711 [2024-10-07 14:50:59.189664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.711 [2024-10-07 14:50:59.196460] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.711 [2024-10-07 14:50:59.196647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.711 [2024-10-07 14:50:59.196668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.711 [2024-10-07 14:50:59.204902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.205233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.205255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.211871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.212075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.212096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.216060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.216251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.216272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.219925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.220118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.220140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.223783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.223966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.223987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.227588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.227771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.227800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.231803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.231988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.232015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.237607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.237791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.237812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.241688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.241869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.241890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.245651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.245832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.245853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.249581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.249739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.249759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.253642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.253800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.253820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.258558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.258789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.258810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.263152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.263315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.263336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.267129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.267291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.267312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.271069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.271227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.271247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.276835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.277171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.277192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.282010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.282160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.282180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.286586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.286814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.286834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.292527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.292724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.292745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.296576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.296727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.296747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.300385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.300535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.300555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.304364] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.304511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.304531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.309590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.309891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.309912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.317911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.318222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.318244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.327804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.328104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.328124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.336279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.336427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.336447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.345337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.345491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.345512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.354720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.355019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.355041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.362296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.362535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.362558] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.367922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.368084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.368105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.375389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.375464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.375484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.383045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.383231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.383252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.392017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.392259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.392281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.398795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.712 [2024-10-07 14:50:59.398864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.712 [2024-10-07 14:50:59.398885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.712 [2024-10-07 14:50:59.404587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.713 [2024-10-07 14:50:59.404669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.713 [2024-10-07 14:50:59.404689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.713 [2024-10-07 14:50:59.410614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.713 [2024-10-07 14:50:59.410816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.713 [2024-10-07 14:50:59.410840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.713 [2024-10-07 14:50:59.417928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.713 [2024-10-07 14:50:59.418230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.713 [2024-10-07 14:50:59.418251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.974 [2024-10-07 14:50:59.424840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.974 [2024-10-07 14:50:59.424920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.974 [2024-10-07 14:50:59.424939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.974 [2024-10-07 14:50:59.430570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.974 [2024-10-07 14:50:59.430653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.974 [2024-10-07 14:50:59.430673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.974 [2024-10-07 14:50:59.438155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.974 [2024-10-07 14:50:59.438239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.974 [2024-10-07 14:50:59.438259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.974 [2024-10-07 14:50:59.444253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.974 [2024-10-07 
14:50:59.444458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.974 [2024-10-07 14:50:59.444479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.974 [2024-10-07 14:50:59.451850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.974 [2024-10-07 14:50:59.451949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.974 [2024-10-07 14:50:59.451969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.974 [2024-10-07 14:50:59.458412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.974 [2024-10-07 14:50:59.458698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.974 [2024-10-07 14:50:59.458720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.974 [2024-10-07 14:50:59.463244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.974 [2024-10-07 14:50:59.463310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.974 [2024-10-07 14:50:59.463330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.974 [2024-10-07 14:50:59.467095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.974 [2024-10-07 14:50:59.467160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.974 [2024-10-07 14:50:59.467180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.974 [2024-10-07 14:50:59.470994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.974 [2024-10-07 14:50:59.471077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.974 [2024-10-07 14:50:59.471097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.974 [2024-10-07 14:50:59.475513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.974 [2024-10-07 14:50:59.475791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.974 [2024-10-07 14:50:59.475811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.974 [2024-10-07 14:50:59.483406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.974 [2024-10-07 14:50:59.483476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.974 [2024-10-07 14:50:59.483497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.974 [2024-10-07 
14:50:59.490964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.974 [2024-10-07 14:50:59.491236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.974 [2024-10-07 14:50:59.491258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.974 [2024-10-07 14:50:59.498321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.974 [2024-10-07 14:50:59.498439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.974 [2024-10-07 14:50:59.498460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.974 [2024-10-07 14:50:59.506061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.974 [2024-10-07 14:50:59.506243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.974 [2024-10-07 14:50:59.506264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.974 [2024-10-07 14:50:59.513570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.974 [2024-10-07 14:50:59.513739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.974 [2024-10-07 14:50:59.513759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.974 [2024-10-07 14:50:59.520520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.974 [2024-10-07 14:50:59.520726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.974 [2024-10-07 14:50:59.520749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.974 [2024-10-07 14:50:59.527299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.974 [2024-10-07 14:50:59.527394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.974 [2024-10-07 14:50:59.527413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.974 [2024-10-07 14:50:59.532450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.974 [2024-10-07 14:50:59.532527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.974 [2024-10-07 14:50:59.532547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.974 [2024-10-07 14:50:59.539654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.974 [2024-10-07 14:50:59.539726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.974 [2024-10-07 14:50:59.539745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.974 [2024-10-07 14:50:59.545175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.975 [2024-10-07 14:50:59.545401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.975 [2024-10-07 14:50:59.545421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.975 [2024-10-07 14:50:59.553840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.975 [2024-10-07 14:50:59.554061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.975 [2024-10-07 14:50:59.554081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.975 [2024-10-07 14:50:59.562408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.975 [2024-10-07 14:50:59.562624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.975 [2024-10-07 14:50:59.562644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.975 [2024-10-07 14:50:59.570599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.975 [2024-10-07 14:50:59.570924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:40:35.975 [2024-10-07 14:50:59.570946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.975 [2024-10-07 14:50:59.578108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.975 [2024-10-07 14:50:59.578342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.975 [2024-10-07 14:50:59.578362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.975 [2024-10-07 14:50:59.585541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.975 [2024-10-07 14:50:59.585811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.975 [2024-10-07 14:50:59.585833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.975 [2024-10-07 14:50:59.594373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.975 [2024-10-07 14:50:59.594458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.975 [2024-10-07 14:50:59.594479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.975 [2024-10-07 14:50:59.602951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.975 [2024-10-07 14:50:59.603150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.975 [2024-10-07 14:50:59.603171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.975 [2024-10-07 14:50:59.609005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.975 [2024-10-07 14:50:59.609205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.975 [2024-10-07 14:50:59.609225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.975 [2024-10-07 14:50:59.619125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.975 [2024-10-07 14:50:59.619222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.975 [2024-10-07 14:50:59.619243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.975 [2024-10-07 14:50:59.627303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.975 [2024-10-07 14:50:59.627561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.975 [2024-10-07 14:50:59.627583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.975 [2024-10-07 14:50:59.635186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200019dfef90 00:40:35.975 [2024-10-07 14:50:59.635264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.975 [2024-10-07 14:50:59.635284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.975 [2024-10-07 14:50:59.643588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.975 [2024-10-07 14:50:59.643708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.975 [2024-10-07 14:50:59.643735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.975 [2024-10-07 14:50:59.648758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.975 [2024-10-07 14:50:59.648826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.975 [2024-10-07 14:50:59.648849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:35.975 [2024-10-07 14:50:59.655305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.975 [2024-10-07 14:50:59.655512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.975 [2024-10-07 14:50:59.655533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:35.975 [2024-10-07 14:50:59.661078] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.975 [2024-10-07 14:50:59.661186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.975 [2024-10-07 14:50:59.661206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:35.975 [2024-10-07 14:50:59.671422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.975 [2024-10-07 14:50:59.671679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.975 [2024-10-07 14:50:59.671701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:35.975 [2024-10-07 14:50:59.681272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:35.975 [2024-10-07 14:50:59.681567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:35.975 [2024-10-07 14:50:59.681589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:36.236 [2024-10-07 14:50:59.691737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:36.236 [2024-10-07 14:50:59.691866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:36.236 [2024-10-07 14:50:59.691886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:36.236 [2024-10-07 14:50:59.701720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:36.236 [2024-10-07 14:50:59.702038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:36.236 [2024-10-07 14:50:59.702059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:36.236 [2024-10-07 14:50:59.712228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:36.236 [2024-10-07 14:50:59.712578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:36.236 [2024-10-07 14:50:59.712599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:36.236 [2024-10-07 14:50:59.721850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:36.236 [2024-10-07 14:50:59.721939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:36.236 [2024-10-07 14:50:59.721959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:36.236 [2024-10-07 14:50:59.726288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:36.236 [2024-10-07 14:50:59.726360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:36.236 [2024-10-07 14:50:59.726381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:36.236 [2024-10-07 14:50:59.731245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:36.236 [2024-10-07 14:50:59.731578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:36.236 [2024-10-07 14:50:59.731599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:40:36.236 [2024-10-07 14:50:59.735204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:36.236 [2024-10-07 14:50:59.735270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:36.236 [2024-10-07 14:50:59.735290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:40:36.236 [2024-10-07 14:50:59.739031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:36.236 [2024-10-07 14:50:59.739098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:36.236 [2024-10-07 14:50:59.739118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:36.236 [2024-10-07 14:50:59.742870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:40:36.236 [2024-10-07 14:50:59.742934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:40:36.236 [2024-10-07 14:50:59.742954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:40:36.236 3877.00 IOPS, 484.62 MiB/s 00:40:36.236 Latency(us) 00:40:36.236 [2024-10-07T12:50:59.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:36.236 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:40:36.236 nvme0n1 : 2.00 3879.55 484.94 0.00 0.00 4119.37 1747.63 15073.28 00:40:36.236 [2024-10-07T12:50:59.945Z] =================================================================================================================== 00:40:36.236 [2024-10-07T12:50:59.945Z] Total : 3879.55 484.94 0.00 0.00 4119.37 1747.63 15073.28 00:40:36.236 { 00:40:36.236 "results": [ 00:40:36.236 { 00:40:36.236 "job": "nvme0n1", 00:40:36.236 "core_mask": "0x2", 00:40:36.236 "workload": "randwrite", 00:40:36.236 "status": "finished", 00:40:36.236 "queue_depth": 16, 00:40:36.236 "io_size": 131072, 00:40:36.236 "runtime": 2.004096, 00:40:36.236 "iops": 3879.5546720316793, 00:40:36.236 "mibps": 484.9443340039599, 00:40:36.236 "io_failed": 0, 00:40:36.236 "io_timeout": 0, 00:40:36.236 "avg_latency_us": 4119.372223794212, 00:40:36.236 "min_latency_us": 1747.6266666666668, 00:40:36.236 "max_latency_us": 15073.28 00:40:36.236 } 00:40:36.236 ], 00:40:36.236 "core_count": 1 00:40:36.236 } 00:40:36.236 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:40:36.236 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:40:36.236 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:40:36.236 | .driver_specific 00:40:36.236 | .nvme_error 00:40:36.236 | .status_code 00:40:36.236 | .command_transient_transport_error' 00:40:36.237 14:50:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:40:36.497 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 250 > 0 )) 00:40:36.497 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3293749 00:40:36.497 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3293749 ']' 00:40:36.497 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3293749 00:40:36.497 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:40:36.497 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:36.497 14:50:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3293749 00:40:36.497 14:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:40:36.497 14:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:40:36.497 14:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3293749' 00:40:36.497 killing process with pid 3293749 00:40:36.497 14:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3293749 00:40:36.497 Received shutdown signal, test time was about 2.000000 seconds 00:40:36.497 00:40:36.497 Latency(us) 00:40:36.497 [2024-10-07T12:51:00.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:36.497 [2024-10-07T12:51:00.206Z] 
=================================================================================================================== 00:40:36.497 [2024-10-07T12:51:00.206Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:36.497 14:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3293749 00:40:37.068 14:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3291024 00:40:37.068 14:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3291024 ']' 00:40:37.068 14:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3291024 00:40:37.068 14:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:40:37.068 14:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:37.068 14:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3291024 00:40:37.068 14:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:37.068 14:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:37.068 14:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3291024' 00:40:37.068 killing process with pid 3291024 00:40:37.068 14:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3291024 00:40:37.068 14:51:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3291024 00:40:38.010 00:40:38.010 real 0m19.157s 00:40:38.010 user 0m36.643s 00:40:38.010 sys 0m3.892s 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:40:38.010 ************************************ 00:40:38.010 END TEST nvmf_digest_error 00:40:38.010 ************************************ 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:38.010 rmmod nvme_tcp 00:40:38.010 rmmod nvme_fabrics 00:40:38.010 rmmod nvme_keyring 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 3291024 ']' 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 3291024 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 3291024 ']' 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 3291024 00:40:38.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3291024) - No such process 00:40:38.010 14:51:01 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 3291024 is not found' 00:40:38.010 Process with pid 3291024 is not found 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:38.010 14:51:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:40.553 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:40.553 00:40:40.553 real 0m48.801s 00:40:40.553 user 1m16.682s 00:40:40.553 sys 0m13.369s 00:40:40.553 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:40.553 14:51:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:40:40.553 ************************************ 00:40:40.553 END TEST nvmf_digest 00:40:40.553 ************************************ 00:40:40.553 14:51:03 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:40:40.553 14:51:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:40:40.553 14:51:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:40:40.553 14:51:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:40:40.553 14:51:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:40.553 14:51:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:40.553 14:51:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:40:40.553 ************************************ 00:40:40.554 START TEST nvmf_bdevperf 00:40:40.554 ************************************ 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:40:40.554 * Looking for test storage... 
00:40:40.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:40:40.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:40.554 --rc genhtml_branch_coverage=1 00:40:40.554 --rc genhtml_function_coverage=1 00:40:40.554 --rc genhtml_legend=1 00:40:40.554 --rc geninfo_all_blocks=1 00:40:40.554 --rc geninfo_unexecuted_blocks=1 00:40:40.554 00:40:40.554 ' 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- 
# LCOV_OPTS=' 00:40:40.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:40.554 --rc genhtml_branch_coverage=1 00:40:40.554 --rc genhtml_function_coverage=1 00:40:40.554 --rc genhtml_legend=1 00:40:40.554 --rc geninfo_all_blocks=1 00:40:40.554 --rc geninfo_unexecuted_blocks=1 00:40:40.554 00:40:40.554 ' 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:40:40.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:40.554 --rc genhtml_branch_coverage=1 00:40:40.554 --rc genhtml_function_coverage=1 00:40:40.554 --rc genhtml_legend=1 00:40:40.554 --rc geninfo_all_blocks=1 00:40:40.554 --rc geninfo_unexecuted_blocks=1 00:40:40.554 00:40:40.554 ' 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:40:40.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:40.554 --rc genhtml_branch_coverage=1 00:40:40.554 --rc genhtml_function_coverage=1 00:40:40.554 --rc genhtml_legend=1 00:40:40.554 --rc geninfo_all_blocks=1 00:40:40.554 --rc geninfo_unexecuted_blocks=1 00:40:40.554 00:40:40.554 ' 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:40.554 14:51:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:40.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:40.554 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:40.555 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:40.555 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:40.555 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:40.555 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:40.555 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:40.555 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:40.555 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:40.555 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:40:40.555 14:51:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:48.687 14:51:10 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:48.687 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:48.687 
14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:48.687 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:48.687 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:48.688 Found net devices under 0000:31:00.0: cvl_0_0 00:40:48.688 14:51:10 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:48.688 Found net devices under 0000:31:00.1: cvl_0_1 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:48.688 14:51:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:48.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:48.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.527 ms 00:40:48.688 00:40:48.688 --- 10.0.0.2 ping statistics --- 00:40:48.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:48.688 rtt min/avg/max/mdev = 0.527/0.527/0.527/0.000 ms 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:48.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:48.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:40:48.688 00:40:48.688 --- 10.0.0.1 ping statistics --- 00:40:48.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:48.688 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=3299414 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 3299414 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3299414 ']' 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:48.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
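The `nvmf_tcp_init` steps traced above (common.sh@250-291) wire a physical two-port NIC into a loopback topology: one port (`cvl_0_0`) is moved into a network namespace to act as the target, the other (`cvl_0_1`) stays in the root namespace as the initiator, and a ping in each direction verifies the path before the target starts. A dry-run sketch of that sequence, assuming the interface names and addresses shown in the log (the `run` helper is hypothetical and only echoes each command, so the sketch is safe to execute without root or real hardware):

```shell
# Dry-run sketch of nvmf_tcp_init from nvmf/common.sh; run() is a
# hypothetical helper that echoes instead of executing.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"            # target NIC moves into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                         # root ns -> namespaced target
run ip netns exec "$NS" ping -c 1 10.0.0.1     # namespace -> root ns
```

Because the target lives in the namespace, every subsequent target-side command in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk` via `NVMF_TARGET_NS_CMD`.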
00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:48.688 14:51:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:48.688 [2024-10-07 14:51:11.346092] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:40:48.688 [2024-10-07 14:51:11.346201] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:48.688 [2024-10-07 14:51:11.487364] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:48.688 [2024-10-07 14:51:11.672672] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:48.688 [2024-10-07 14:51:11.672730] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:48.688 [2024-10-07 14:51:11.672742] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:48.688 [2024-10-07 14:51:11.672754] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:48.688 [2024-10-07 14:51:11.672764] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:48.688 [2024-10-07 14:51:11.674622] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:40:48.688 [2024-10-07 14:51:11.674743] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:48.688 [2024-10-07 14:51:11.674768] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:40:48.688 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:48.688 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:40:48.688 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:48.688 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:48.688 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:48.688 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:48.688 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:48.688 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.688 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:48.688 [2024-10-07 14:51:12.162273] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:48.688 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.688 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:48.688 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.688 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:48.688 Malloc0 00:40:48.688 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:40:48.688 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:48.688 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.688 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:48.688 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.688 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:48.688 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.688 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:48.688 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.689 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:48.689 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:48.689 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:48.689 [2024-10-07 14:51:12.258481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:48.689 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:48.689 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:40:48.689 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:40:48.689 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:40:48.689 
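The `tgt_init` RPCs traced above (bdevperf.sh@17-21) provision the target in four steps: create the TCP transport, create a 64 MiB malloc bdev, expose it through a subsystem, and open a listener on 10.0.0.2:4420. A dry-run sketch of that sequence with the exact arguments from the log (the `rpc` helper is hypothetical and echoes rather than invoking `rpc_cmd`, so it runs anywhere):

```shell
# Dry-run sketch of the tgt_init RPC sequence from host/bdevperf.sh;
# rpc() is a hypothetical stand-in for rpc_cmd that only echoes.
rpc() { echo "rpc_cmd $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

Each `rpc_cmd` in the log is bracketed by `xtrace_disable`/`set +x` and a `[[ 0 == 0 ]]` return-code check, which is why the trace shows those guards around every call.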
14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:40:48.689 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:40:48.689 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:48.689 { 00:40:48.689 "params": { 00:40:48.689 "name": "Nvme$subsystem", 00:40:48.689 "trtype": "$TEST_TRANSPORT", 00:40:48.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:48.689 "adrfam": "ipv4", 00:40:48.689 "trsvcid": "$NVMF_PORT", 00:40:48.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:48.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:48.689 "hdgst": ${hdgst:-false}, 00:40:48.689 "ddgst": ${ddgst:-false} 00:40:48.689 }, 00:40:48.689 "method": "bdev_nvme_attach_controller" 00:40:48.689 } 00:40:48.689 EOF 00:40:48.689 )") 00:40:48.689 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:40:48.689 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:40:48.689 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:40:48.689 14:51:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:48.689 "params": { 00:40:48.689 "name": "Nvme1", 00:40:48.689 "trtype": "tcp", 00:40:48.689 "traddr": "10.0.0.2", 00:40:48.689 "adrfam": "ipv4", 00:40:48.689 "trsvcid": "4420", 00:40:48.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:48.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:48.689 "hdgst": false, 00:40:48.689 "ddgst": false 00:40:48.689 }, 00:40:48.689 "method": "bdev_nvme_attach_controller" 00:40:48.689 }' 00:40:48.689 [2024-10-07 14:51:12.342684] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
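The `gen_nvmf_target_json` expansion above shows the technique bdevperf uses to attach its controller: a heredoc template is expanded per subsystem into a `bdev_nvme_attach_controller` params object and piped to `--json /dev/fd/62`, so no config file ever touches disk. A sketch of the per-controller expansion, assuming the values printed in the log (`controller_params` is a hypothetical name for the heredoc step; the real helper also joins multiple objects with `IFS=,` and filters through `jq`):

```shell
# Hypothetical helper reproducing the heredoc that gen_nvmf_target_json
# expands for each subsystem; values mirror the printf output in the log.
controller_params() {
  local subsystem=$1 target_ip=${2:-10.0.0.2} port=${3:-4420}
  cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "$target_ip",
    "adrfam": "ipv4",
    "trsvcid": "$port",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

controller_params 1
```

Feeding this through process substitution keeps the controller credentials in lockstep with the variables (`NVMF_FIRST_TARGET_IP`, `NVMF_PORT`) the same script used to provision the target.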
00:40:48.689 [2024-10-07 14:51:12.342788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3299742 ] 00:40:48.949 [2024-10-07 14:51:12.458678] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:48.949 [2024-10-07 14:51:12.638702] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:40:49.519 Running I/O for 1 seconds... 00:40:50.457 7969.00 IOPS, 31.13 MiB/s 00:40:50.457 Latency(us) 00:40:50.457 [2024-10-07T12:51:14.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:50.457 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:50.457 Verification LBA range: start 0x0 length 0x4000 00:40:50.457 Nvme1n1 : 1.02 8062.38 31.49 0.00 0.00 15807.55 3659.09 14308.69 00:40:50.457 [2024-10-07T12:51:14.166Z] =================================================================================================================== 00:40:50.457 [2024-10-07T12:51:14.166Z] Total : 8062.38 31.49 0.00 0.00 15807.55 3659.09 14308.69 00:40:51.395 14:51:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3300102 00:40:51.395 14:51:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:40:51.395 14:51:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:40:51.395 14:51:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:40:51.395 14:51:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:40:51.395 14:51:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:40:51.395 14:51:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for 
subsystem in "${@:-1}" 00:40:51.395 14:51:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:51.395 { 00:40:51.395 "params": { 00:40:51.395 "name": "Nvme$subsystem", 00:40:51.395 "trtype": "$TEST_TRANSPORT", 00:40:51.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:51.395 "adrfam": "ipv4", 00:40:51.395 "trsvcid": "$NVMF_PORT", 00:40:51.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:51.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:51.395 "hdgst": ${hdgst:-false}, 00:40:51.395 "ddgst": ${ddgst:-false} 00:40:51.395 }, 00:40:51.395 "method": "bdev_nvme_attach_controller" 00:40:51.395 } 00:40:51.395 EOF 00:40:51.395 )") 00:40:51.395 14:51:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:40:51.395 14:51:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:40:51.395 14:51:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:40:51.395 14:51:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:51.395 "params": { 00:40:51.395 "name": "Nvme1", 00:40:51.395 "trtype": "tcp", 00:40:51.395 "traddr": "10.0.0.2", 00:40:51.395 "adrfam": "ipv4", 00:40:51.395 "trsvcid": "4420", 00:40:51.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:51.395 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:51.395 "hdgst": false, 00:40:51.395 "ddgst": false 00:40:51.395 }, 00:40:51.395 "method": "bdev_nvme_attach_controller" 00:40:51.395 }' 00:40:51.395 [2024-10-07 14:51:14.910299] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:40:51.396 [2024-10-07 14:51:14.910411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3300102 ] 00:40:51.396 [2024-10-07 14:51:15.023734] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:51.656 [2024-10-07 14:51:15.204077] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:40:52.227 Running I/O for 15 seconds... 00:40:54.104 9910.00 IOPS, 38.71 MiB/s [2024-10-07T12:51:18.075Z] 9957.00 IOPS, 38.89 MiB/s [2024-10-07T12:51:18.075Z] 14:51:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3299414 00:40:54.366 14:51:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:40:54.366 [2024-10-07 14:51:17.851139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:54.366 [2024-10-07 14:51:17.851193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.366 [2024-10-07 14:51:17.851223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:42752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.366 [2024-10-07 14:51:17.851238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.366 [2024-10-07 14:51:17.851253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.366 [2024-10-07 14:51:17.851265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.366 [2024-10-07 14:51:17.851279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:13 nsid:1 lba:42768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.366 [2024-10-07 14:51:17.851290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.366 [2024-10-07 14:51:17.851304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:42776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.366 [2024-10-07 14:51:17.851316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.366 [2024-10-07 14:51:17.851330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.366 [2024-10-07 14:51:17.851341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.366 [2024-10-07 14:51:17.851355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.366 [2024-10-07 14:51:17.851367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.366 [2024-10-07 14:51:17.851382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:42800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.366 [2024-10-07 14:51:17.851397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.366 [2024-10-07 14:51:17.851417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.366 [2024-10-07 14:51:17.851429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:40:54.366 [2024-10-07 14:51:17.851442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.366 [2024-10-07 14:51:17.851456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.366 [2024-10-07 14:51:17.851472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.366 [2024-10-07 14:51:17.851486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.366 [2024-10-07 14:51:17.851501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.366 [2024-10-07 14:51:17.851513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.366 [2024-10-07 14:51:17.851527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.366 [2024-10-07 14:51:17.851538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.366 [2024-10-07 14:51:17.851552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:42848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.366 [2024-10-07 14:51:17.851563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.366 [2024-10-07 14:51:17.851576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:42856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.366 [2024-10-07 14:51:17.851587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.366 [2024-10-07 14:51:17.851601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.366 [2024-10-07 14:51:17.851611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.366 [2024-10-07 14:51:17.851624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.366 [2024-10-07 14:51:17.851635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.366 [2024-10-07 14:51:17.851648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:42880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.366 [2024-10-07 14:51:17.851658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.366 [2024-10-07 14:51:17.851670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.366 [2024-10-07 14:51:17.851680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.366 [2024-10-07 14:51:17.851693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:42896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.366 [2024-10-07 14:51:17.851704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.366 [2024-10-07 14:51:17.851717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 
lba:42904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.366 [2024-10-07 14:51:17.851728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.366 [2024-10-07 14:51:17.851742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:42912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.851752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.851765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:42920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.851775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.851788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.851798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.851811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:42936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.851821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.851834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.851844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 
[2024-10-07 14:51:17.851857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.851868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.851880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:42960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.851891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.851903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:42968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.851913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.851926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.851936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.851949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.851960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.851972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:42696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:54.367 [2024-10-07 14:51:17.851983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.851995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:54.367 [2024-10-07 14:51:17.852012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:54.367 [2024-10-07 14:51:17.852037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:54.367 [2024-10-07 14:51:17.852061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:42728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:54.367 [2024-10-07 14:51:17.852087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:42736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:54.367 [2024-10-07 14:51:17.852109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 
lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:54.367 [2024-10-07 14:51:17.852132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:42992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 
[2024-10-07 14:51:17.852270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:43040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 
lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 
14:51:17.852670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.367 [2024-10-07 14:51:17.852726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.367 [2024-10-07 14:51:17.852738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.852748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.852761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.852772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.852784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.852794] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.852807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.852819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.852832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.852842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.852854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.852866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.852880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.852890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.852902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.852913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.852926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:43256 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.852938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.852950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.852961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.852974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.852984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.852996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:43296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853071] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:43368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:43392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 
[2024-10-07 14:51:17.853345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:43400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:43424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:43432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853474] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:43456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:43464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:43472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:43480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:43504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:43512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.368 [2024-10-07 14:51:17.853719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.368 [2024-10-07 14:51:17.853729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.369 [2024-10-07 14:51:17.853742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.369 [2024-10-07 14:51:17.853753] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.369 [2024-10-07 14:51:17.853766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.369 [2024-10-07 14:51:17.853776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.369 [2024-10-07 14:51:17.853788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.369 [2024-10-07 14:51:17.853798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.369 [2024-10-07 14:51:17.853812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:43552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.369 [2024-10-07 14:51:17.853822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.369 [2024-10-07 14:51:17.853837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.369 [2024-10-07 14:51:17.853847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.369 [2024-10-07 14:51:17.853859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.369 [2024-10-07 14:51:17.853870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.369 [2024-10-07 14:51:17.853882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 
lba:43576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.369 [2024-10-07 14:51:17.853892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.369 [2024-10-07 14:51:17.853905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:43584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.369 [2024-10-07 14:51:17.853915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.369 [2024-10-07 14:51:17.853928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.369 [2024-10-07 14:51:17.853938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.369 [2024-10-07 14:51:17.853951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.369 [2024-10-07 14:51:17.853961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.369 [2024-10-07 14:51:17.853973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.369 [2024-10-07 14:51:17.853983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.369 [2024-10-07 14:51:17.853996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:43616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.369 [2024-10-07 14:51:17.854011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.369 [2024-10-07 
14:51:17.854024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.369 [2024-10-07 14:51:17.854035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.369 [2024-10-07 14:51:17.854047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:43632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.369 [2024-10-07 14:51:17.854057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.369 [2024-10-07 14:51:17.854070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:43640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.369 [2024-10-07 14:51:17.854081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.369 [2024-10-07 14:51:17.854094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.369 [2024-10-07 14:51:17.854104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.369 [2024-10-07 14:51:17.854116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:43656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.369 [2024-10-07 14:51:17.854127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:54.369 [2024-10-07 14:51:17.854141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:43664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:40:54.369 [2024-10-07 14:51:17.854152] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:54.369 [2024-10-07 14:51:17.854164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:54.369 [2024-10-07 14:51:17.854175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:54.369 [2024-10-07 14:51:17.854187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:43680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:54.369 [2024-10-07 14:51:17.854198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:54.369 [2024-10-07 14:51:17.854210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:43688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:54.369 [2024-10-07 14:51:17.854220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:54.369 [2024-10-07 14:51:17.854233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:40:54.369 [2024-10-07 14:51:17.854244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:54.369 [2024-10-07 14:51:17.854256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039f100 is same with the state(6) to be set
00:40:54.369 [2024-10-07 14:51:17.854271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:40:54.369 [2024-10-07 14:51:17.854281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:40:54.369 [2024-10-07 14:51:17.854293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43704 len:8 PRP1 0x0 PRP2 0x0
00:40:54.369 [2024-10-07 14:51:17.854304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:54.369 [2024-10-07 14:51:17.854515] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500039f100 was disconnected and freed. reset controller.
00:40:54.369 [2024-10-07 14:51:17.858194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:54.369 [2024-10-07 14:51:17.858272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:54.369 [2024-10-07 14:51:17.859261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:54.369 [2024-10-07 14:51:17.859308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:54.369 [2024-10-07 14:51:17.859331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:54.369 [2024-10-07 14:51:17.859619] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:54.369 [2024-10-07 14:51:17.859861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:54.369 [2024-10-07 14:51:17.859874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:54.369 [2024-10-07 14:51:17.859887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:54.369 [2024-10-07 14:51:17.863621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:54.369 [2024-10-07 14:51:17.872648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:54.369 [2024-10-07 14:51:17.873336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:54.369 [2024-10-07 14:51:17.873385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:54.369 [2024-10-07 14:51:17.873401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:54.369 [2024-10-07 14:51:17.873670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:54.369 [2024-10-07 14:51:17.873912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:54.369 [2024-10-07 14:51:17.873926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:54.369 [2024-10-07 14:51:17.873937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:54.369 [2024-10-07 14:51:17.877668] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:54.369 [2024-10-07 14:51:17.886758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:54.369 [2024-10-07 14:51:17.887440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:54.369 [2024-10-07 14:51:17.887488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:54.369 [2024-10-07 14:51:17.887503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:54.369 [2024-10-07 14:51:17.887770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:54.369 [2024-10-07 14:51:17.888018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:54.369 [2024-10-07 14:51:17.888032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:54.369 [2024-10-07 14:51:17.888043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:54.369 [2024-10-07 14:51:17.891767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:54.369 [2024-10-07 14:51:17.900778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:54.369 [2024-10-07 14:51:17.901391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:54.369 [2024-10-07 14:51:17.901416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:54.369 [2024-10-07 14:51:17.901428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:54.369 [2024-10-07 14:51:17.901664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:54.369 [2024-10-07 14:51:17.901899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:54.369 [2024-10-07 14:51:17.901911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:54.370 [2024-10-07 14:51:17.901921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:54.370 [2024-10-07 14:51:17.905643] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:54.370 [2024-10-07 14:51:17.914868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:54.370 [2024-10-07 14:51:17.915437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:54.370 [2024-10-07 14:51:17.915461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:54.370 [2024-10-07 14:51:17.915472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:54.370 [2024-10-07 14:51:17.915711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:54.370 [2024-10-07 14:51:17.915946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:54.370 [2024-10-07 14:51:17.915958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:54.370 [2024-10-07 14:51:17.915968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:54.370 [2024-10-07 14:51:17.919690] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:54.370 [2024-10-07 14:51:17.928904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:54.370 [2024-10-07 14:51:17.929556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:54.370 [2024-10-07 14:51:17.929604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:54.370 [2024-10-07 14:51:17.929620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:54.370 [2024-10-07 14:51:17.929887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:54.370 [2024-10-07 14:51:17.930134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:54.370 [2024-10-07 14:51:17.930148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:54.370 [2024-10-07 14:51:17.930159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:54.370 [2024-10-07 14:51:17.933897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:54.370 [2024-10-07 14:51:17.942912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:54.370 [2024-10-07 14:51:17.943597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:54.370 [2024-10-07 14:51:17.943645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:54.370 [2024-10-07 14:51:17.943661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:54.370 [2024-10-07 14:51:17.943927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:54.370 [2024-10-07 14:51:17.944178] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:54.370 [2024-10-07 14:51:17.944194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:54.370 [2024-10-07 14:51:17.944204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:54.370 [2024-10-07 14:51:17.947929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:54.370 [2024-10-07 14:51:17.956978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:54.370 [2024-10-07 14:51:17.957686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:54.370 [2024-10-07 14:51:17.957733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:54.370 [2024-10-07 14:51:17.957749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:54.370 [2024-10-07 14:51:17.958025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:54.370 [2024-10-07 14:51:17.958266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:54.370 [2024-10-07 14:51:17.958279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:54.370 [2024-10-07 14:51:17.958295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:54.370 [2024-10-07 14:51:17.962039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:54.370 [2024-10-07 14:51:17.971054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:54.370 [2024-10-07 14:51:17.971762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:54.370 [2024-10-07 14:51:17.971810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:54.370 [2024-10-07 14:51:17.971825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:54.370 [2024-10-07 14:51:17.972101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:54.370 [2024-10-07 14:51:17.972343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:54.370 [2024-10-07 14:51:17.972356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:54.370 [2024-10-07 14:51:17.972367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:54.370 [2024-10-07 14:51:17.976097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:54.370 [2024-10-07 14:51:17.985107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:54.370 [2024-10-07 14:51:17.985664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:54.370 [2024-10-07 14:51:17.985710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:54.370 [2024-10-07 14:51:17.985725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:54.370 [2024-10-07 14:51:17.985992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:54.370 [2024-10-07 14:51:17.986253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:54.370 [2024-10-07 14:51:17.986268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:54.370 [2024-10-07 14:51:17.986279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:54.370 [2024-10-07 14:51:17.990007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:54.370 [2024-10-07 14:51:17.999238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:54.370 [2024-10-07 14:51:17.999964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:54.370 [2024-10-07 14:51:18.000019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:54.370 [2024-10-07 14:51:18.000037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:54.370 [2024-10-07 14:51:18.000309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:54.370 [2024-10-07 14:51:18.000550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:54.370 [2024-10-07 14:51:18.000563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:54.370 [2024-10-07 14:51:18.000574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:54.370 [2024-10-07 14:51:18.004303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:54.370 [2024-10-07 14:51:18.013314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:54.370 [2024-10-07 14:51:18.014030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:54.370 [2024-10-07 14:51:18.014078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:54.370 [2024-10-07 14:51:18.014095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:54.370 [2024-10-07 14:51:18.014362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:54.370 [2024-10-07 14:51:18.014601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:54.370 [2024-10-07 14:51:18.014615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:54.370 [2024-10-07 14:51:18.014626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:54.370 [2024-10-07 14:51:18.018357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:54.370 [2024-10-07 14:51:18.027368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:54.370 [2024-10-07 14:51:18.028101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:54.370 [2024-10-07 14:51:18.028148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:54.370 [2024-10-07 14:51:18.028165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:54.371 [2024-10-07 14:51:18.028432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:54.371 [2024-10-07 14:51:18.028673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:54.371 [2024-10-07 14:51:18.028687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:54.371 [2024-10-07 14:51:18.028697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:54.371 [2024-10-07 14:51:18.032441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:54.371 [2024-10-07 14:51:18.041455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:54.371 [2024-10-07 14:51:18.042104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:54.371 [2024-10-07 14:51:18.042153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:54.371 [2024-10-07 14:51:18.042169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:54.371 [2024-10-07 14:51:18.042436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:54.371 [2024-10-07 14:51:18.042676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:54.371 [2024-10-07 14:51:18.042689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:54.371 [2024-10-07 14:51:18.042700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:54.371 [2024-10-07 14:51:18.046431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:54.371 [2024-10-07 14:51:18.055661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:54.371 [2024-10-07 14:51:18.056338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:54.371 [2024-10-07 14:51:18.056385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:54.371 [2024-10-07 14:51:18.056401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:54.371 [2024-10-07 14:51:18.056672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:54.371 [2024-10-07 14:51:18.056913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:54.371 [2024-10-07 14:51:18.056927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:54.371 [2024-10-07 14:51:18.056938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:54.371 [2024-10-07 14:51:18.060692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:54.371 [2024-10-07 14:51:18.069708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:54.371 [2024-10-07 14:51:18.070407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:54.371 [2024-10-07 14:51:18.070455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:54.371 [2024-10-07 14:51:18.070471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:54.371 [2024-10-07 14:51:18.070738] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:54.371 [2024-10-07 14:51:18.070978] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:54.371 [2024-10-07 14:51:18.070992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:54.371 [2024-10-07 14:51:18.071013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:54.632 [2024-10-07 14:51:18.074733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:54.632 [2024-10-07 14:51:18.083737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:54.632 [2024-10-07 14:51:18.084390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:54.632 [2024-10-07 14:51:18.084438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:54.632 [2024-10-07 14:51:18.084455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:54.632 [2024-10-07 14:51:18.084722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:54.632 [2024-10-07 14:51:18.084962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:54.632 [2024-10-07 14:51:18.084976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:54.632 [2024-10-07 14:51:18.084989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:54.632 [2024-10-07 14:51:18.088722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:54.632 [2024-10-07 14:51:18.097730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:54.632 [2024-10-07 14:51:18.098387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:54.632 [2024-10-07 14:51:18.098435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:54.632 [2024-10-07 14:51:18.098451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:54.632 [2024-10-07 14:51:18.098718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:54.632 [2024-10-07 14:51:18.098958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:54.632 [2024-10-07 14:51:18.098972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:54.632 [2024-10-07 14:51:18.098989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:54.632 [2024-10-07 14:51:18.102725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:54.632 [2024-10-07 14:51:18.111740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:54.632 [2024-10-07 14:51:18.112462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:54.632 [2024-10-07 14:51:18.112509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:54.632 [2024-10-07 14:51:18.112525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:54.632 [2024-10-07 14:51:18.112791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:54.632 [2024-10-07 14:51:18.113042] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:54.632 [2024-10-07 14:51:18.113057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:54.632 [2024-10-07 14:51:18.113068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:54.632 [2024-10-07 14:51:18.116794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:54.632 [2024-10-07 14:51:18.125811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:54.632 [2024-10-07 14:51:18.126520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:54.632 [2024-10-07 14:51:18.126568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:54.632 [2024-10-07 14:51:18.126584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:54.632 [2024-10-07 14:51:18.126850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:54.632 [2024-10-07 14:51:18.127100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:54.632 [2024-10-07 14:51:18.127116] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:54.632 [2024-10-07 14:51:18.127127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:54.632 [2024-10-07 14:51:18.130850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:54.632 [2024-10-07 14:51:18.139944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:54.632 [2024-10-07 14:51:18.140600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:54.632 [2024-10-07 14:51:18.140627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:54.632 [2024-10-07 14:51:18.140639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:54.632 [2024-10-07 14:51:18.140875] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:54.632 [2024-10-07 14:51:18.141117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:54.632 [2024-10-07 14:51:18.141130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:54.632 [2024-10-07 14:51:18.141140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:54.632 [2024-10-07 14:51:18.144858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:54.632 [2024-10-07 14:51:18.154082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:54.632 [2024-10-07 14:51:18.154783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:54.632 [2024-10-07 14:51:18.154830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:54.632 [2024-10-07 14:51:18.154846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:54.632 [2024-10-07 14:51:18.155124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:54.632 [2024-10-07 14:51:18.155365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:54.632 [2024-10-07 14:51:18.155378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:54.632 [2024-10-07 14:51:18.155389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:54.632 [2024-10-07 14:51:18.159111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:54.632 [2024-10-07 14:51:18.168137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:54.632 [2024-10-07 14:51:18.168807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:54.632 [2024-10-07 14:51:18.168861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:54.632 [2024-10-07 14:51:18.168877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:54.632 [2024-10-07 14:51:18.169156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:54.633 [2024-10-07 14:51:18.169397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:54.633 [2024-10-07 14:51:18.169410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:54.633 [2024-10-07 14:51:18.169421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:54.633 [2024-10-07 14:51:18.173145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:54.633 [2024-10-07 14:51:18.182157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:54.633 [2024-10-07 14:51:18.182732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:54.633 [2024-10-07 14:51:18.182758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:54.633 [2024-10-07 14:51:18.182769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:54.633 [2024-10-07 14:51:18.183014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:54.633 [2024-10-07 14:51:18.183249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:54.633 [2024-10-07 14:51:18.183262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:54.633 [2024-10-07 14:51:18.183272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:54.633 [2024-10-07 14:51:18.186985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... the same nine-line reset cycle repeats 27 more times between 14:51:18.196 and 14:51:18.567: each attempt to reconnect [nqn.2016-06.io.spdk:cnode1] to 10.0.0.2:4420 fails with connect() errno = 111 on tqpair=0x61500039e200, ending in "Resetting controller failed." ...]
00:40:54.897 [2024-10-07 14:51:18.576288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:54.897 [2024-10-07 14:51:18.576872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:54.897 [2024-10-07 14:51:18.576897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:54.897 [2024-10-07 14:51:18.576908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:54.897 [2024-10-07 14:51:18.577150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:54.897 [2024-10-07 14:51:18.577386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:54.897 [2024-10-07 14:51:18.577399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:54.897 [2024-10-07 14:51:18.577409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:54.897 [2024-10-07 14:51:18.581135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:54.897 [2024-10-07 14:51:18.590372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:54.897 [2024-10-07 14:51:18.590974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:54.897 [2024-10-07 14:51:18.590998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:54.897 [2024-10-07 14:51:18.591015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:54.897 [2024-10-07 14:51:18.591250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:54.897 [2024-10-07 14:51:18.591485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:54.897 [2024-10-07 14:51:18.591496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:54.897 [2024-10-07 14:51:18.591506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:54.897 [2024-10-07 14:51:18.595234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.160 [2024-10-07 14:51:18.604466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.160 [2024-10-07 14:51:18.605028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.160 [2024-10-07 14:51:18.605052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.160 [2024-10-07 14:51:18.605063] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.160 [2024-10-07 14:51:18.605298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.160 [2024-10-07 14:51:18.605532] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.160 [2024-10-07 14:51:18.605544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.160 [2024-10-07 14:51:18.605553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.160 [2024-10-07 14:51:18.609282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.160 [2024-10-07 14:51:18.618522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.160 [2024-10-07 14:51:18.619074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.160 [2024-10-07 14:51:18.619106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.160 [2024-10-07 14:51:18.619121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.160 [2024-10-07 14:51:18.619356] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.160 [2024-10-07 14:51:18.619591] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.160 [2024-10-07 14:51:18.619604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.160 [2024-10-07 14:51:18.619615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.160 [2024-10-07 14:51:18.623348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.160 [2024-10-07 14:51:18.632591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.160 [2024-10-07 14:51:18.633211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.160 [2024-10-07 14:51:18.633259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.160 [2024-10-07 14:51:18.633274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.160 [2024-10-07 14:51:18.633542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.160 [2024-10-07 14:51:18.633782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.161 [2024-10-07 14:51:18.633796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.161 [2024-10-07 14:51:18.633807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.161 [2024-10-07 14:51:18.637542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.161 [2024-10-07 14:51:18.646791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.161 [2024-10-07 14:51:18.647528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.161 [2024-10-07 14:51:18.647575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.161 [2024-10-07 14:51:18.647591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.161 [2024-10-07 14:51:18.647859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.161 [2024-10-07 14:51:18.648107] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.161 [2024-10-07 14:51:18.648121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.161 [2024-10-07 14:51:18.648132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.161 [2024-10-07 14:51:18.651863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.161 [2024-10-07 14:51:18.660892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.161 [2024-10-07 14:51:18.661466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.161 [2024-10-07 14:51:18.661492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.161 [2024-10-07 14:51:18.661504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.161 [2024-10-07 14:51:18.661740] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.161 [2024-10-07 14:51:18.661986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.161 [2024-10-07 14:51:18.662012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.161 [2024-10-07 14:51:18.662023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.161 [2024-10-07 14:51:18.667458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.161 7240.00 IOPS, 28.28 MiB/s [2024-10-07T12:51:18.870Z] [2024-10-07 14:51:18.674951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.161 [2024-10-07 14:51:18.675554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.161 [2024-10-07 14:51:18.675578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.161 [2024-10-07 14:51:18.675589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.161 [2024-10-07 14:51:18.675824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.161 [2024-10-07 14:51:18.676066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.161 [2024-10-07 14:51:18.676080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.161 [2024-10-07 14:51:18.676090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.161 [2024-10-07 14:51:18.679810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.161 [2024-10-07 14:51:18.689046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.161 [2024-10-07 14:51:18.689641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.161 [2024-10-07 14:51:18.689664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.161 [2024-10-07 14:51:18.689675] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.161 [2024-10-07 14:51:18.689909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.161 [2024-10-07 14:51:18.690152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.161 [2024-10-07 14:51:18.690165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.161 [2024-10-07 14:51:18.690175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.161 [2024-10-07 14:51:18.693896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.161 [2024-10-07 14:51:18.703127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.161 [2024-10-07 14:51:18.703823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.161 [2024-10-07 14:51:18.703871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.161 [2024-10-07 14:51:18.703887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.161 [2024-10-07 14:51:18.704165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.161 [2024-10-07 14:51:18.704406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.161 [2024-10-07 14:51:18.704419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.161 [2024-10-07 14:51:18.704430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.161 [2024-10-07 14:51:18.708165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.161 [2024-10-07 14:51:18.717207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.161 [2024-10-07 14:51:18.717815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.161 [2024-10-07 14:51:18.717861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.161 [2024-10-07 14:51:18.717877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.161 [2024-10-07 14:51:18.718155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.161 [2024-10-07 14:51:18.718396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.161 [2024-10-07 14:51:18.718410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.161 [2024-10-07 14:51:18.718421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.161 [2024-10-07 14:51:18.722149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.161 [2024-10-07 14:51:18.731374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.161 [2024-10-07 14:51:18.731993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.161 [2024-10-07 14:51:18.732034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.161 [2024-10-07 14:51:18.732046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.161 [2024-10-07 14:51:18.732283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.161 [2024-10-07 14:51:18.732519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.161 [2024-10-07 14:51:18.732532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.161 [2024-10-07 14:51:18.732543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.161 [2024-10-07 14:51:18.736263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.161 [2024-10-07 14:51:18.745485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.161 [2024-10-07 14:51:18.746207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.161 [2024-10-07 14:51:18.746256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.161 [2024-10-07 14:51:18.746271] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.161 [2024-10-07 14:51:18.746538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.161 [2024-10-07 14:51:18.746779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.161 [2024-10-07 14:51:18.746792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.161 [2024-10-07 14:51:18.746803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.161 [2024-10-07 14:51:18.750537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.161 [2024-10-07 14:51:18.759571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.161 [2024-10-07 14:51:18.760297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.161 [2024-10-07 14:51:18.760344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.161 [2024-10-07 14:51:18.760365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.161 [2024-10-07 14:51:18.760631] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.161 [2024-10-07 14:51:18.760872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.161 [2024-10-07 14:51:18.760885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.161 [2024-10-07 14:51:18.760897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.162 [2024-10-07 14:51:18.764653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.162 [2024-10-07 14:51:18.773664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.162 [2024-10-07 14:51:18.774281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.162 [2024-10-07 14:51:18.774307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.162 [2024-10-07 14:51:18.774319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.162 [2024-10-07 14:51:18.774554] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.162 [2024-10-07 14:51:18.774788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.162 [2024-10-07 14:51:18.774801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.162 [2024-10-07 14:51:18.774811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.162 [2024-10-07 14:51:18.778528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.162 [2024-10-07 14:51:18.787760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.162 [2024-10-07 14:51:18.788325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.162 [2024-10-07 14:51:18.788349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.162 [2024-10-07 14:51:18.788360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.162 [2024-10-07 14:51:18.788594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.162 [2024-10-07 14:51:18.788829] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.162 [2024-10-07 14:51:18.788841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.162 [2024-10-07 14:51:18.788851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.162 [2024-10-07 14:51:18.792578] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.162 [2024-10-07 14:51:18.801806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.162 [2024-10-07 14:51:18.802500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.162 [2024-10-07 14:51:18.802548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.162 [2024-10-07 14:51:18.802565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.162 [2024-10-07 14:51:18.802832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.162 [2024-10-07 14:51:18.803087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.162 [2024-10-07 14:51:18.803102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.162 [2024-10-07 14:51:18.803113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.162 [2024-10-07 14:51:18.807054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.162 [2024-10-07 14:51:18.815865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.162 [2024-10-07 14:51:18.816557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.162 [2024-10-07 14:51:18.816604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.162 [2024-10-07 14:51:18.816620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.162 [2024-10-07 14:51:18.816886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.162 [2024-10-07 14:51:18.817137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.162 [2024-10-07 14:51:18.817152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.162 [2024-10-07 14:51:18.817164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.162 [2024-10-07 14:51:18.820893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.162 [2024-10-07 14:51:18.829920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.162 [2024-10-07 14:51:18.830496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.162 [2024-10-07 14:51:18.830522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.162 [2024-10-07 14:51:18.830533] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.162 [2024-10-07 14:51:18.830768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.162 [2024-10-07 14:51:18.831020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.162 [2024-10-07 14:51:18.831033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.162 [2024-10-07 14:51:18.831044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.162 [2024-10-07 14:51:18.834766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.162 [2024-10-07 14:51:18.844008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.162 [2024-10-07 14:51:18.844614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.162 [2024-10-07 14:51:18.844638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.162 [2024-10-07 14:51:18.844649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.162 [2024-10-07 14:51:18.844884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.162 [2024-10-07 14:51:18.845125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.162 [2024-10-07 14:51:18.845139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.162 [2024-10-07 14:51:18.845149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.162 [2024-10-07 14:51:18.848870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.162 [2024-10-07 14:51:18.858112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.162 [2024-10-07 14:51:18.858713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.162 [2024-10-07 14:51:18.858737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.162 [2024-10-07 14:51:18.858748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.162 [2024-10-07 14:51:18.858983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.162 [2024-10-07 14:51:18.859227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.162 [2024-10-07 14:51:18.859240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.162 [2024-10-07 14:51:18.859249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.162 [2024-10-07 14:51:18.862991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.424 [2024-10-07 14:51:18.872239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.424 [2024-10-07 14:51:18.872795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.424 [2024-10-07 14:51:18.872817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.424 [2024-10-07 14:51:18.872829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.424 [2024-10-07 14:51:18.873071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.424 [2024-10-07 14:51:18.873306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.424 [2024-10-07 14:51:18.873321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.424 [2024-10-07 14:51:18.873332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.424 [2024-10-07 14:51:18.877148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.424 [2024-10-07 14:51:18.886377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.424 [2024-10-07 14:51:18.886978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.424 [2024-10-07 14:51:18.887009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.424 [2024-10-07 14:51:18.887020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.424 [2024-10-07 14:51:18.887255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.424 [2024-10-07 14:51:18.887490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.424 [2024-10-07 14:51:18.887503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.425 [2024-10-07 14:51:18.887512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.425 [2024-10-07 14:51:18.891234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.425 [2024-10-07 14:51:18.900466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.425 [2024-10-07 14:51:18.901056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.425 [2024-10-07 14:51:18.901080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.425 [2024-10-07 14:51:18.901099] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.425 [2024-10-07 14:51:18.901334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.425 [2024-10-07 14:51:18.901568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.425 [2024-10-07 14:51:18.901580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.425 [2024-10-07 14:51:18.901590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.425 [2024-10-07 14:51:18.905317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.425 [2024-10-07 14:51:18.914550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.425 [2024-10-07 14:51:18.915250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.425 [2024-10-07 14:51:18.915297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.425 [2024-10-07 14:51:18.915313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.425 [2024-10-07 14:51:18.915581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.425 [2024-10-07 14:51:18.915821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.425 [2024-10-07 14:51:18.915835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.425 [2024-10-07 14:51:18.915846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.425 [2024-10-07 14:51:18.919578] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.425 [2024-10-07 14:51:18.928590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.425 [2024-10-07 14:51:18.929324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.425 [2024-10-07 14:51:18.929373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.425 [2024-10-07 14:51:18.929389] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.425 [2024-10-07 14:51:18.929657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.425 [2024-10-07 14:51:18.929898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.425 [2024-10-07 14:51:18.929912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.425 [2024-10-07 14:51:18.929923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.425 [2024-10-07 14:51:18.933664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.425 [2024-10-07 14:51:18.942681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.425 [2024-10-07 14:51:18.943272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.425 [2024-10-07 14:51:18.943298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.425 [2024-10-07 14:51:18.943310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.425 [2024-10-07 14:51:18.943546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.425 [2024-10-07 14:51:18.943786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.425 [2024-10-07 14:51:18.943799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.425 [2024-10-07 14:51:18.943809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.425 [2024-10-07 14:51:18.947535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.425 [2024-10-07 14:51:18.956771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.425 [2024-10-07 14:51:18.957436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.425 [2024-10-07 14:51:18.957483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.425 [2024-10-07 14:51:18.957499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.425 [2024-10-07 14:51:18.957766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.425 [2024-10-07 14:51:18.958016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.425 [2024-10-07 14:51:18.958030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.425 [2024-10-07 14:51:18.958041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.425 [2024-10-07 14:51:18.961770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.425 [2024-10-07 14:51:18.970827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.425 [2024-10-07 14:51:18.971531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.425 [2024-10-07 14:51:18.971557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.425 [2024-10-07 14:51:18.971569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.425 [2024-10-07 14:51:18.971805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.425 [2024-10-07 14:51:18.972048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.425 [2024-10-07 14:51:18.972062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.425 [2024-10-07 14:51:18.972071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.425 [2024-10-07 14:51:18.975794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.425 [2024-10-07 14:51:18.985023] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.425 [2024-10-07 14:51:18.985581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.425 [2024-10-07 14:51:18.985605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.425 [2024-10-07 14:51:18.985615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.425 [2024-10-07 14:51:18.985850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.425 [2024-10-07 14:51:18.986092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.425 [2024-10-07 14:51:18.986105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.425 [2024-10-07 14:51:18.986115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.425 [2024-10-07 14:51:18.989837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.425 [2024-10-07 14:51:18.999072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.425 [2024-10-07 14:51:18.999538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.425 [2024-10-07 14:51:18.999563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.425 [2024-10-07 14:51:18.999574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.425 [2024-10-07 14:51:18.999809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.425 [2024-10-07 14:51:19.000055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.425 [2024-10-07 14:51:19.000070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.425 [2024-10-07 14:51:19.000080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.425 [2024-10-07 14:51:19.003802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.425 [2024-10-07 14:51:19.013250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.425 [2024-10-07 14:51:19.013845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.425 [2024-10-07 14:51:19.013868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.425 [2024-10-07 14:51:19.013878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.425 [2024-10-07 14:51:19.014118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.425 [2024-10-07 14:51:19.014354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.425 [2024-10-07 14:51:19.014366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.425 [2024-10-07 14:51:19.014375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.425 [2024-10-07 14:51:19.018100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.425 [2024-10-07 14:51:19.027327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.425 [2024-10-07 14:51:19.028031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.425 [2024-10-07 14:51:19.028078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.425 [2024-10-07 14:51:19.028094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.425 [2024-10-07 14:51:19.028361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.425 [2024-10-07 14:51:19.028601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.425 [2024-10-07 14:51:19.028616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.425 [2024-10-07 14:51:19.028627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.425 [2024-10-07 14:51:19.032368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.425 [2024-10-07 14:51:19.041374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.425 [2024-10-07 14:51:19.042069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.426 [2024-10-07 14:51:19.042117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.426 [2024-10-07 14:51:19.042140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.426 [2024-10-07 14:51:19.042409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.426 [2024-10-07 14:51:19.042649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.426 [2024-10-07 14:51:19.042663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.426 [2024-10-07 14:51:19.042673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.426 [2024-10-07 14:51:19.046403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.426 [2024-10-07 14:51:19.055423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.426 [2024-10-07 14:51:19.056008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.426 [2024-10-07 14:51:19.056034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.426 [2024-10-07 14:51:19.056046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.426 [2024-10-07 14:51:19.056281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.426 [2024-10-07 14:51:19.056516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.426 [2024-10-07 14:51:19.056528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.426 [2024-10-07 14:51:19.056537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.426 [2024-10-07 14:51:19.060258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.426 [2024-10-07 14:51:19.069514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.426 [2024-10-07 14:51:19.070144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.426 [2024-10-07 14:51:19.070192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.426 [2024-10-07 14:51:19.070207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.426 [2024-10-07 14:51:19.070474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.426 [2024-10-07 14:51:19.070715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.426 [2024-10-07 14:51:19.070728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.426 [2024-10-07 14:51:19.070739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.426 [2024-10-07 14:51:19.074472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.426 [2024-10-07 14:51:19.083713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.426 [2024-10-07 14:51:19.084321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.426 [2024-10-07 14:51:19.084348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.426 [2024-10-07 14:51:19.084359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.426 [2024-10-07 14:51:19.084594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.426 [2024-10-07 14:51:19.084834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.426 [2024-10-07 14:51:19.084847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.426 [2024-10-07 14:51:19.084857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.426 [2024-10-07 14:51:19.088580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.426 [2024-10-07 14:51:19.097817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.426 [2024-10-07 14:51:19.098501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.426 [2024-10-07 14:51:19.098549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.426 [2024-10-07 14:51:19.098564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.426 [2024-10-07 14:51:19.098831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.426 [2024-10-07 14:51:19.099083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.426 [2024-10-07 14:51:19.099098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.426 [2024-10-07 14:51:19.099109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.426 [2024-10-07 14:51:19.102834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.426 [2024-10-07 14:51:19.111858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.426 [2024-10-07 14:51:19.112451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.426 [2024-10-07 14:51:19.112477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.426 [2024-10-07 14:51:19.112489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.426 [2024-10-07 14:51:19.112725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.426 [2024-10-07 14:51:19.112961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.426 [2024-10-07 14:51:19.112974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.426 [2024-10-07 14:51:19.112985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.426 [2024-10-07 14:51:19.116717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.426 [2024-10-07 14:51:19.125957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.426 [2024-10-07 14:51:19.126562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.426 [2024-10-07 14:51:19.126589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.426 [2024-10-07 14:51:19.126601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.426 [2024-10-07 14:51:19.126835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.426 [2024-10-07 14:51:19.127076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.426 [2024-10-07 14:51:19.127090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.426 [2024-10-07 14:51:19.127100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.426 [2024-10-07 14:51:19.130828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.700 [2024-10-07 14:51:19.140084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.700 [2024-10-07 14:51:19.140778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.700 [2024-10-07 14:51:19.140826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.700 [2024-10-07 14:51:19.140842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.700 [2024-10-07 14:51:19.141120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.700 [2024-10-07 14:51:19.141362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.700 [2024-10-07 14:51:19.141375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.700 [2024-10-07 14:51:19.141386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.700 [2024-10-07 14:51:19.145119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.700 [2024-10-07 14:51:19.154142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.700 [2024-10-07 14:51:19.154769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.700 [2024-10-07 14:51:19.154795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.700 [2024-10-07 14:51:19.154807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.700 [2024-10-07 14:51:19.155051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.700 [2024-10-07 14:51:19.155288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.700 [2024-10-07 14:51:19.155300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.700 [2024-10-07 14:51:19.155310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.700 [2024-10-07 14:51:19.159040] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.700 [2024-10-07 14:51:19.168299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.700 [2024-10-07 14:51:19.168900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.700 [2024-10-07 14:51:19.168923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.700 [2024-10-07 14:51:19.168942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.700 [2024-10-07 14:51:19.169183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.700 [2024-10-07 14:51:19.169419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.700 [2024-10-07 14:51:19.169431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.700 [2024-10-07 14:51:19.169441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.700 [2024-10-07 14:51:19.173169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.700 [2024-10-07 14:51:19.182406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.700 [2024-10-07 14:51:19.182956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.700 [2024-10-07 14:51:19.182983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.700 [2024-10-07 14:51:19.182994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.700 [2024-10-07 14:51:19.183236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.700 [2024-10-07 14:51:19.183470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.700 [2024-10-07 14:51:19.183482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.700 [2024-10-07 14:51:19.183492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.700 [2024-10-07 14:51:19.187216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.700 [2024-10-07 14:51:19.196449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.700 [2024-10-07 14:51:19.197022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.700 [2024-10-07 14:51:19.197046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.700 [2024-10-07 14:51:19.197057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.700 [2024-10-07 14:51:19.197292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.700 [2024-10-07 14:51:19.197526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.700 [2024-10-07 14:51:19.197538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.700 [2024-10-07 14:51:19.197547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.700 [2024-10-07 14:51:19.201276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.700 [2024-10-07 14:51:19.210510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.700 [2024-10-07 14:51:19.211250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.701 [2024-10-07 14:51:19.211298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.701 [2024-10-07 14:51:19.211314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.701 [2024-10-07 14:51:19.211581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.701 [2024-10-07 14:51:19.211821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.701 [2024-10-07 14:51:19.211835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.701 [2024-10-07 14:51:19.211846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.701 [2024-10-07 14:51:19.215580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.701 [2024-10-07 14:51:19.224591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.701 [2024-10-07 14:51:19.225123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.701 [2024-10-07 14:51:19.225149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.701 [2024-10-07 14:51:19.225161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.701 [2024-10-07 14:51:19.225397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.701 [2024-10-07 14:51:19.225637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.701 [2024-10-07 14:51:19.225649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.701 [2024-10-07 14:51:19.225659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.701 [2024-10-07 14:51:19.229380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.701 [2024-10-07 14:51:19.238609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:55.701 [2024-10-07 14:51:19.239264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:55.701 [2024-10-07 14:51:19.239312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:55.701 [2024-10-07 14:51:19.239327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:55.701 [2024-10-07 14:51:19.239594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:55.701 [2024-10-07 14:51:19.239834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:55.701 [2024-10-07 14:51:19.239849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:55.701 [2024-10-07 14:51:19.239860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:55.701 [2024-10-07 14:51:19.243587] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:55.701 [2024-10-07 14:51:19.252611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.701 [2024-10-07 14:51:19.253303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.701 [2024-10-07 14:51:19.253350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.701 [2024-10-07 14:51:19.253366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.701 [2024-10-07 14:51:19.253632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.701 [2024-10-07 14:51:19.253871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.701 [2024-10-07 14:51:19.253885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.701 [2024-10-07 14:51:19.253896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.701 [2024-10-07 14:51:19.257631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.701 [2024-10-07 14:51:19.266658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.701 [2024-10-07 14:51:19.267265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.701 [2024-10-07 14:51:19.267291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.701 [2024-10-07 14:51:19.267303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.701 [2024-10-07 14:51:19.267537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.701 [2024-10-07 14:51:19.267773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.701 [2024-10-07 14:51:19.267785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.701 [2024-10-07 14:51:19.267795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.701 [2024-10-07 14:51:19.271518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.701 [2024-10-07 14:51:19.280737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.701 [2024-10-07 14:51:19.281302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.701 [2024-10-07 14:51:19.281326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.701 [2024-10-07 14:51:19.281337] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.701 [2024-10-07 14:51:19.281572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.701 [2024-10-07 14:51:19.281806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.701 [2024-10-07 14:51:19.281818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.701 [2024-10-07 14:51:19.281828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.701 [2024-10-07 14:51:19.285549] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.701 [2024-10-07 14:51:19.294769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.701 [2024-10-07 14:51:19.295413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.701 [2024-10-07 14:51:19.295461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.701 [2024-10-07 14:51:19.295476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.701 [2024-10-07 14:51:19.295743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.701 [2024-10-07 14:51:19.295983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.701 [2024-10-07 14:51:19.295996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.701 [2024-10-07 14:51:19.296018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.701 [2024-10-07 14:51:19.299742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.701 [2024-10-07 14:51:19.308967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.701 [2024-10-07 14:51:19.309647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.701 [2024-10-07 14:51:19.309694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.701 [2024-10-07 14:51:19.309710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.701 [2024-10-07 14:51:19.309977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.701 [2024-10-07 14:51:19.310228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.701 [2024-10-07 14:51:19.310242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.701 [2024-10-07 14:51:19.310253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.701 [2024-10-07 14:51:19.313978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.701 [2024-10-07 14:51:19.322987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.701 [2024-10-07 14:51:19.323709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.701 [2024-10-07 14:51:19.323761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.701 [2024-10-07 14:51:19.323776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.701 [2024-10-07 14:51:19.324053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.701 [2024-10-07 14:51:19.324294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.701 [2024-10-07 14:51:19.324307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.701 [2024-10-07 14:51:19.324318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.701 [2024-10-07 14:51:19.328048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.701 [2024-10-07 14:51:19.337073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.701 [2024-10-07 14:51:19.337671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.701 [2024-10-07 14:51:19.337697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.701 [2024-10-07 14:51:19.337708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.701 [2024-10-07 14:51:19.337945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.701 [2024-10-07 14:51:19.338189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.701 [2024-10-07 14:51:19.338203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.701 [2024-10-07 14:51:19.338213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.701 [2024-10-07 14:51:19.341930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.701 [2024-10-07 14:51:19.351150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.701 [2024-10-07 14:51:19.351806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.701 [2024-10-07 14:51:19.351853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.701 [2024-10-07 14:51:19.351869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.701 [2024-10-07 14:51:19.352145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.702 [2024-10-07 14:51:19.352386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.702 [2024-10-07 14:51:19.352400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.702 [2024-10-07 14:51:19.352411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.702 [2024-10-07 14:51:19.356139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.702 [2024-10-07 14:51:19.365177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.702 [2024-10-07 14:51:19.365881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.702 [2024-10-07 14:51:19.365929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.702 [2024-10-07 14:51:19.365945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.702 [2024-10-07 14:51:19.366222] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.702 [2024-10-07 14:51:19.366476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.702 [2024-10-07 14:51:19.366490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.702 [2024-10-07 14:51:19.366501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.702 [2024-10-07 14:51:19.370226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.702 [2024-10-07 14:51:19.379230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.702 [2024-10-07 14:51:19.379718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.702 [2024-10-07 14:51:19.379744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.702 [2024-10-07 14:51:19.379755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.702 [2024-10-07 14:51:19.379991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.702 [2024-10-07 14:51:19.380233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.702 [2024-10-07 14:51:19.380246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.702 [2024-10-07 14:51:19.380256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.702 [2024-10-07 14:51:19.383968] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.702 [2024-10-07 14:51:19.393408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.702 [2024-10-07 14:51:19.393839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.702 [2024-10-07 14:51:19.393863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.702 [2024-10-07 14:51:19.393875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.702 [2024-10-07 14:51:19.394117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.702 [2024-10-07 14:51:19.394352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.702 [2024-10-07 14:51:19.394365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.702 [2024-10-07 14:51:19.394374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.702 [2024-10-07 14:51:19.398088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.702 [2024-10-07 14:51:19.407524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.962 [2024-10-07 14:51:19.408086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.962 [2024-10-07 14:51:19.408111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.962 [2024-10-07 14:51:19.408124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.962 [2024-10-07 14:51:19.408360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.962 [2024-10-07 14:51:19.408596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.962 [2024-10-07 14:51:19.408608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.962 [2024-10-07 14:51:19.408623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.962 [2024-10-07 14:51:19.412343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.962 [2024-10-07 14:51:19.421558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.962 [2024-10-07 14:51:19.422280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.962 [2024-10-07 14:51:19.422327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.962 [2024-10-07 14:51:19.422343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.962 [2024-10-07 14:51:19.422610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.962 [2024-10-07 14:51:19.422850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.963 [2024-10-07 14:51:19.422864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.963 [2024-10-07 14:51:19.422874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.963 [2024-10-07 14:51:19.426604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.963 [2024-10-07 14:51:19.435621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.963 [2024-10-07 14:51:19.436133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.963 [2024-10-07 14:51:19.436180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.963 [2024-10-07 14:51:19.436197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.963 [2024-10-07 14:51:19.436464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.963 [2024-10-07 14:51:19.436704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.963 [2024-10-07 14:51:19.436717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.963 [2024-10-07 14:51:19.436728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.963 [2024-10-07 14:51:19.440457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.963 [2024-10-07 14:51:19.449690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.963 [2024-10-07 14:51:19.450409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.963 [2024-10-07 14:51:19.450456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.963 [2024-10-07 14:51:19.450473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.963 [2024-10-07 14:51:19.450739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.963 [2024-10-07 14:51:19.450979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.963 [2024-10-07 14:51:19.450993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.963 [2024-10-07 14:51:19.451013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.963 [2024-10-07 14:51:19.454740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.963 [2024-10-07 14:51:19.463747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.963 [2024-10-07 14:51:19.464445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.963 [2024-10-07 14:51:19.464497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.963 [2024-10-07 14:51:19.464513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.963 [2024-10-07 14:51:19.464780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.963 [2024-10-07 14:51:19.465046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.963 [2024-10-07 14:51:19.465062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.963 [2024-10-07 14:51:19.465073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.963 [2024-10-07 14:51:19.468796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.963 [2024-10-07 14:51:19.477809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.963 [2024-10-07 14:51:19.478539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.963 [2024-10-07 14:51:19.478587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.963 [2024-10-07 14:51:19.478602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.963 [2024-10-07 14:51:19.478869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.963 [2024-10-07 14:51:19.479122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.963 [2024-10-07 14:51:19.479136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.963 [2024-10-07 14:51:19.479147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.963 [2024-10-07 14:51:19.482875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.963 [2024-10-07 14:51:19.491888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.963 [2024-10-07 14:51:19.492638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.963 [2024-10-07 14:51:19.492686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.963 [2024-10-07 14:51:19.492702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.963 [2024-10-07 14:51:19.492968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.963 [2024-10-07 14:51:19.493218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.963 [2024-10-07 14:51:19.493233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.963 [2024-10-07 14:51:19.493244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.963 [2024-10-07 14:51:19.496968] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.963 [2024-10-07 14:51:19.505975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.963 [2024-10-07 14:51:19.506656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.963 [2024-10-07 14:51:19.506703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.963 [2024-10-07 14:51:19.506718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.963 [2024-10-07 14:51:19.506993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.963 [2024-10-07 14:51:19.507243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.963 [2024-10-07 14:51:19.507256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.963 [2024-10-07 14:51:19.507267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.963 [2024-10-07 14:51:19.510988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.963 [2024-10-07 14:51:19.519997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.963 [2024-10-07 14:51:19.520704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.963 [2024-10-07 14:51:19.520752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.963 [2024-10-07 14:51:19.520768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.963 [2024-10-07 14:51:19.521044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.963 [2024-10-07 14:51:19.521285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.963 [2024-10-07 14:51:19.521298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.963 [2024-10-07 14:51:19.521309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.963 [2024-10-07 14:51:19.525033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.963 [2024-10-07 14:51:19.534048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.963 [2024-10-07 14:51:19.534762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.963 [2024-10-07 14:51:19.534809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.963 [2024-10-07 14:51:19.534826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.963 [2024-10-07 14:51:19.535103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.963 [2024-10-07 14:51:19.535345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.963 [2024-10-07 14:51:19.535359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.963 [2024-10-07 14:51:19.535369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.963 [2024-10-07 14:51:19.539096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.963 [2024-10-07 14:51:19.548104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.963 [2024-10-07 14:51:19.548811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.963 [2024-10-07 14:51:19.548858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.963 [2024-10-07 14:51:19.548874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.963 [2024-10-07 14:51:19.549151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.963 [2024-10-07 14:51:19.549392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.963 [2024-10-07 14:51:19.549405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.963 [2024-10-07 14:51:19.549421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.963 [2024-10-07 14:51:19.553145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.963 [2024-10-07 14:51:19.562157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.963 [2024-10-07 14:51:19.562785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.963 [2024-10-07 14:51:19.562810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.963 [2024-10-07 14:51:19.562822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.963 [2024-10-07 14:51:19.563064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.964 [2024-10-07 14:51:19.563300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.964 [2024-10-07 14:51:19.563312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.964 [2024-10-07 14:51:19.563328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.964 [2024-10-07 14:51:19.567068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.964 [2024-10-07 14:51:19.576287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.964 [2024-10-07 14:51:19.576881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.964 [2024-10-07 14:51:19.576905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.964 [2024-10-07 14:51:19.576916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.964 [2024-10-07 14:51:19.577158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.964 [2024-10-07 14:51:19.577394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.964 [2024-10-07 14:51:19.577406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.964 [2024-10-07 14:51:19.577415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.964 [2024-10-07 14:51:19.581137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.964 [2024-10-07 14:51:19.590362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.964 [2024-10-07 14:51:19.590909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.964 [2024-10-07 14:51:19.590931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.964 [2024-10-07 14:51:19.590942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.964 [2024-10-07 14:51:19.591184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.964 [2024-10-07 14:51:19.591420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.964 [2024-10-07 14:51:19.591431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.964 [2024-10-07 14:51:19.591441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.964 [2024-10-07 14:51:19.595156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.964 [2024-10-07 14:51:19.604368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.964 [2024-10-07 14:51:19.604968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.964 [2024-10-07 14:51:19.604991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.964 [2024-10-07 14:51:19.605009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.964 [2024-10-07 14:51:19.605244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.964 [2024-10-07 14:51:19.605478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.964 [2024-10-07 14:51:19.605491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.964 [2024-10-07 14:51:19.605500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.964 [2024-10-07 14:51:19.609220] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.964 [2024-10-07 14:51:19.618438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.964 [2024-10-07 14:51:19.619123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.964 [2024-10-07 14:51:19.619170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.964 [2024-10-07 14:51:19.619187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.964 [2024-10-07 14:51:19.619454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.964 [2024-10-07 14:51:19.619695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.964 [2024-10-07 14:51:19.619708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.964 [2024-10-07 14:51:19.619720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.964 [2024-10-07 14:51:19.623452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.964 [2024-10-07 14:51:19.632470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.964 [2024-10-07 14:51:19.633190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.964 [2024-10-07 14:51:19.633238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.964 [2024-10-07 14:51:19.633253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.964 [2024-10-07 14:51:19.633520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.964 [2024-10-07 14:51:19.633761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.964 [2024-10-07 14:51:19.633774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.964 [2024-10-07 14:51:19.633785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.964 [2024-10-07 14:51:19.637517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.964 [2024-10-07 14:51:19.646523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.964 [2024-10-07 14:51:19.647294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.964 [2024-10-07 14:51:19.647342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.964 [2024-10-07 14:51:19.647358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.964 [2024-10-07 14:51:19.647629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.964 [2024-10-07 14:51:19.647869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.964 [2024-10-07 14:51:19.647883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.964 [2024-10-07 14:51:19.647893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.964 [2024-10-07 14:51:19.651623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:55.964 [2024-10-07 14:51:19.660634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:55.964 [2024-10-07 14:51:19.661329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:55.964 [2024-10-07 14:51:19.661376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:55.964 [2024-10-07 14:51:19.661392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:55.964 [2024-10-07 14:51:19.661659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:55.964 [2024-10-07 14:51:19.661899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:55.964 [2024-10-07 14:51:19.661912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:55.964 [2024-10-07 14:51:19.661923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:55.964 [2024-10-07 14:51:19.665677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.225 5430.00 IOPS, 21.21 MiB/s [2024-10-07T12:51:19.934Z] [2024-10-07 14:51:19.674655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.225 [2024-10-07 14:51:19.675381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.225 [2024-10-07 14:51:19.675429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.225 [2024-10-07 14:51:19.675445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.225 [2024-10-07 14:51:19.675711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.225 [2024-10-07 14:51:19.675952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.225 [2024-10-07 14:51:19.675965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.225 [2024-10-07 14:51:19.675976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.225 [2024-10-07 14:51:19.679703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.225 [2024-10-07 14:51:19.688713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.225 [2024-10-07 14:51:19.689332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.225 [2024-10-07 14:51:19.689358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.225 [2024-10-07 14:51:19.689369] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.225 [2024-10-07 14:51:19.689605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.225 [2024-10-07 14:51:19.689840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.225 [2024-10-07 14:51:19.689853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.225 [2024-10-07 14:51:19.689867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.225 [2024-10-07 14:51:19.693589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.225 [2024-10-07 14:51:19.702808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.225 [2024-10-07 14:51:19.703391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.225 [2024-10-07 14:51:19.703414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.225 [2024-10-07 14:51:19.703425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.225 [2024-10-07 14:51:19.703660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.225 [2024-10-07 14:51:19.703894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.225 [2024-10-07 14:51:19.703906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.225 [2024-10-07 14:51:19.703916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.225 [2024-10-07 14:51:19.707635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.225 [2024-10-07 14:51:19.716848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.225 [2024-10-07 14:51:19.717441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.225 [2024-10-07 14:51:19.717465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.225 [2024-10-07 14:51:19.717475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.225 [2024-10-07 14:51:19.717709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.225 [2024-10-07 14:51:19.717944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.225 [2024-10-07 14:51:19.717955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.225 [2024-10-07 14:51:19.717965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.225 [2024-10-07 14:51:19.721682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.225 [2024-10-07 14:51:19.730896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.225 [2024-10-07 14:51:19.731601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.225 [2024-10-07 14:51:19.731648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.225 [2024-10-07 14:51:19.731665] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.225 [2024-10-07 14:51:19.731932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.225 [2024-10-07 14:51:19.732182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.225 [2024-10-07 14:51:19.732196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.225 [2024-10-07 14:51:19.732207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.225 [2024-10-07 14:51:19.735929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.225 [2024-10-07 14:51:19.744945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.225 [2024-10-07 14:51:19.745566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.225 [2024-10-07 14:51:19.745593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.226 [2024-10-07 14:51:19.745605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.226 [2024-10-07 14:51:19.745843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.226 [2024-10-07 14:51:19.746084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.226 [2024-10-07 14:51:19.746098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.226 [2024-10-07 14:51:19.746108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.226 [2024-10-07 14:51:19.749826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.226 [2024-10-07 14:51:19.759055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.226 [2024-10-07 14:51:19.759758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.226 [2024-10-07 14:51:19.759805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.226 [2024-10-07 14:51:19.759823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.226 [2024-10-07 14:51:19.760097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.226 [2024-10-07 14:51:19.760338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.226 [2024-10-07 14:51:19.760352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.226 [2024-10-07 14:51:19.760363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.226 [2024-10-07 14:51:19.764095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.226 [2024-10-07 14:51:19.773135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.226 [2024-10-07 14:51:19.773610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.226 [2024-10-07 14:51:19.773636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.226 [2024-10-07 14:51:19.773648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.226 [2024-10-07 14:51:19.773884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.226 [2024-10-07 14:51:19.774126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.226 [2024-10-07 14:51:19.774139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.226 [2024-10-07 14:51:19.774149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.226 [2024-10-07 14:51:19.777864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.226 [2024-10-07 14:51:19.787303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.226 [2024-10-07 14:51:19.787865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.226 [2024-10-07 14:51:19.787889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.226 [2024-10-07 14:51:19.787900] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.226 [2024-10-07 14:51:19.788145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.226 [2024-10-07 14:51:19.788380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.226 [2024-10-07 14:51:19.788392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.226 [2024-10-07 14:51:19.788402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.226 [2024-10-07 14:51:19.792118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.226 [2024-10-07 14:51:19.801337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.226 [2024-10-07 14:51:19.801887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.226 [2024-10-07 14:51:19.801910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.226 [2024-10-07 14:51:19.801922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.226 [2024-10-07 14:51:19.802161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.226 [2024-10-07 14:51:19.802397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.226 [2024-10-07 14:51:19.802409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.226 [2024-10-07 14:51:19.802418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.226 [2024-10-07 14:51:19.806334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.226 [2024-10-07 14:51:19.815341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.226 [2024-10-07 14:51:19.815937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.226 [2024-10-07 14:51:19.815960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.226 [2024-10-07 14:51:19.815971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.226 [2024-10-07 14:51:19.816212] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.226 [2024-10-07 14:51:19.816448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.226 [2024-10-07 14:51:19.816460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.226 [2024-10-07 14:51:19.816470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.226 [2024-10-07 14:51:19.820184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.226 [2024-10-07 14:51:19.829400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.226 [2024-10-07 14:51:19.830004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.226 [2024-10-07 14:51:19.830027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.226 [2024-10-07 14:51:19.830038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.226 [2024-10-07 14:51:19.830273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.226 [2024-10-07 14:51:19.830507] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.226 [2024-10-07 14:51:19.830523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.226 [2024-10-07 14:51:19.830533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.226 [2024-10-07 14:51:19.834260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.226 [2024-10-07 14:51:19.843473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.226 [2024-10-07 14:51:19.844053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.226 [2024-10-07 14:51:19.844084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.226 [2024-10-07 14:51:19.844095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.226 [2024-10-07 14:51:19.844337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.226 [2024-10-07 14:51:19.844571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.226 [2024-10-07 14:51:19.844583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.226 [2024-10-07 14:51:19.844593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.226 [2024-10-07 14:51:19.848319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.226 [2024-10-07 14:51:19.857537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.226 [2024-10-07 14:51:19.858107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.226 [2024-10-07 14:51:19.858154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.226 [2024-10-07 14:51:19.858172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.226 [2024-10-07 14:51:19.858441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.226 [2024-10-07 14:51:19.858681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.226 [2024-10-07 14:51:19.858693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.226 [2024-10-07 14:51:19.858705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.226 [2024-10-07 14:51:19.862437] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.226 [2024-10-07 14:51:19.871695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.226 [2024-10-07 14:51:19.872457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.226 [2024-10-07 14:51:19.872504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.226 [2024-10-07 14:51:19.872520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.226 [2024-10-07 14:51:19.872786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.226 [2024-10-07 14:51:19.873036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.226 [2024-10-07 14:51:19.873050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.226 [2024-10-07 14:51:19.873061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.226 [2024-10-07 14:51:19.876783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.226 [2024-10-07 14:51:19.885793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.226 [2024-10-07 14:51:19.886507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.226 [2024-10-07 14:51:19.886554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.226 [2024-10-07 14:51:19.886571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.226 [2024-10-07 14:51:19.886838] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.226 [2024-10-07 14:51:19.887087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.226 [2024-10-07 14:51:19.887102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.226 [2024-10-07 14:51:19.887113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.226 [2024-10-07 14:51:19.890833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.227 [2024-10-07 14:51:19.899838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.227 [2024-10-07 14:51:19.900544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.227 [2024-10-07 14:51:19.900592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.227 [2024-10-07 14:51:19.900608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.227 [2024-10-07 14:51:19.900874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.227 [2024-10-07 14:51:19.901125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.227 [2024-10-07 14:51:19.901140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.227 [2024-10-07 14:51:19.901151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.227 [2024-10-07 14:51:19.904953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.227 [2024-10-07 14:51:19.913968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.227 [2024-10-07 14:51:19.914679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.227 [2024-10-07 14:51:19.914727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.227 [2024-10-07 14:51:19.914743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.227 [2024-10-07 14:51:19.915019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.227 [2024-10-07 14:51:19.915261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.227 [2024-10-07 14:51:19.915274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.227 [2024-10-07 14:51:19.915285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.227 [2024-10-07 14:51:19.919009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.227 [2024-10-07 14:51:19.928020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.227 [2024-10-07 14:51:19.928701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.227 [2024-10-07 14:51:19.928749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.227 [2024-10-07 14:51:19.928765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.227 [2024-10-07 14:51:19.929049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.227 [2024-10-07 14:51:19.929291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.227 [2024-10-07 14:51:19.929305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.227 [2024-10-07 14:51:19.929315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.227 [2024-10-07 14:51:19.933049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.487 [2024-10-07 14:51:19.942065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.487 [2024-10-07 14:51:19.942703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.487 [2024-10-07 14:51:19.942729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.487 [2024-10-07 14:51:19.942742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.487 [2024-10-07 14:51:19.942977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.487 [2024-10-07 14:51:19.943221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.487 [2024-10-07 14:51:19.943235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.487 [2024-10-07 14:51:19.943245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.487 [2024-10-07 14:51:19.946963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.487 [2024-10-07 14:51:19.956190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.487 [2024-10-07 14:51:19.956845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.487 [2024-10-07 14:51:19.956893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.487 [2024-10-07 14:51:19.956908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.487 [2024-10-07 14:51:19.957185] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.487 [2024-10-07 14:51:19.957425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.487 [2024-10-07 14:51:19.957439] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.488 [2024-10-07 14:51:19.957450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.488 [2024-10-07 14:51:19.961179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.488 [2024-10-07 14:51:19.970210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.488 [2024-10-07 14:51:19.970822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.488 [2024-10-07 14:51:19.970856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.488 [2024-10-07 14:51:19.970868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.488 [2024-10-07 14:51:19.971110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.488 [2024-10-07 14:51:19.971346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.488 [2024-10-07 14:51:19.971362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.488 [2024-10-07 14:51:19.971373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.488 [2024-10-07 14:51:19.975093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.488 [2024-10-07 14:51:19.984305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.488 [2024-10-07 14:51:19.984911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.488 [2024-10-07 14:51:19.984935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.488 [2024-10-07 14:51:19.984946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.488 [2024-10-07 14:51:19.985186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.488 [2024-10-07 14:51:19.985421] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.488 [2024-10-07 14:51:19.985433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.488 [2024-10-07 14:51:19.985443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.488 [2024-10-07 14:51:19.989160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.488 [2024-10-07 14:51:19.998470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.488 [2024-10-07 14:51:19.999020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.488 [2024-10-07 14:51:19.999046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.488 [2024-10-07 14:51:19.999058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.488 [2024-10-07 14:51:19.999296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.488 [2024-10-07 14:51:19.999531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.488 [2024-10-07 14:51:19.999544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.488 [2024-10-07 14:51:19.999554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.488 [2024-10-07 14:51:20.003771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.488 [2024-10-07 14:51:20.012576] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.488 [2024-10-07 14:51:20.013306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.488 [2024-10-07 14:51:20.013354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.488 [2024-10-07 14:51:20.013371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.488 [2024-10-07 14:51:20.013638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.488 [2024-10-07 14:51:20.013878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.488 [2024-10-07 14:51:20.013891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.488 [2024-10-07 14:51:20.013903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.488 [2024-10-07 14:51:20.017639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.488 [2024-10-07 14:51:20.026666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.488 [2024-10-07 14:51:20.027367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.488 [2024-10-07 14:51:20.027415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.488 [2024-10-07 14:51:20.027431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.488 [2024-10-07 14:51:20.027698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.488 [2024-10-07 14:51:20.027938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.488 [2024-10-07 14:51:20.027951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.488 [2024-10-07 14:51:20.027964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.488 [2024-10-07 14:51:20.031715] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.488 [2024-10-07 14:51:20.040743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.488 [2024-10-07 14:51:20.041455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.488 [2024-10-07 14:51:20.041502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.488 [2024-10-07 14:51:20.041518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.488 [2024-10-07 14:51:20.041784] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.488 [2024-10-07 14:51:20.042032] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.488 [2024-10-07 14:51:20.042046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.488 [2024-10-07 14:51:20.042058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.488 [2024-10-07 14:51:20.045790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.488 [2024-10-07 14:51:20.054819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.488 [2024-10-07 14:51:20.055486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.488 [2024-10-07 14:51:20.055534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.488 [2024-10-07 14:51:20.055550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.488 [2024-10-07 14:51:20.055817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.488 [2024-10-07 14:51:20.056066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.488 [2024-10-07 14:51:20.056081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.488 [2024-10-07 14:51:20.056092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.488 [2024-10-07 14:51:20.059818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.488 [2024-10-07 14:51:20.068863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.488 [2024-10-07 14:51:20.069579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.488 [2024-10-07 14:51:20.069628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.488 [2024-10-07 14:51:20.069650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.488 [2024-10-07 14:51:20.069920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.488 [2024-10-07 14:51:20.070173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.488 [2024-10-07 14:51:20.070188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.488 [2024-10-07 14:51:20.070199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.488 [2024-10-07 14:51:20.073925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.488 [2024-10-07 14:51:20.082944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.488 [2024-10-07 14:51:20.083650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.488 [2024-10-07 14:51:20.083698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.488 [2024-10-07 14:51:20.083713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.488 [2024-10-07 14:51:20.083982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.488 [2024-10-07 14:51:20.084232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.488 [2024-10-07 14:51:20.084247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.488 [2024-10-07 14:51:20.084258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.488 [2024-10-07 14:51:20.087983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.488 [2024-10-07 14:51:20.097005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.488 [2024-10-07 14:51:20.097642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.488 [2024-10-07 14:51:20.097668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.488 [2024-10-07 14:51:20.097680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.488 [2024-10-07 14:51:20.097916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.488 [2024-10-07 14:51:20.098159] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.488 [2024-10-07 14:51:20.098172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.488 [2024-10-07 14:51:20.098182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.488 [2024-10-07 14:51:20.101904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.488 [2024-10-07 14:51:20.111131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.488 [2024-10-07 14:51:20.111701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.488 [2024-10-07 14:51:20.111724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.488 [2024-10-07 14:51:20.111735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.488 [2024-10-07 14:51:20.111970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.488 [2024-10-07 14:51:20.112213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.488 [2024-10-07 14:51:20.112231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.488 [2024-10-07 14:51:20.112240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.488 [2024-10-07 14:51:20.115962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.488 [2024-10-07 14:51:20.125268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.488 [2024-10-07 14:51:20.125943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.488 [2024-10-07 14:51:20.125991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.488 [2024-10-07 14:51:20.126016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.488 [2024-10-07 14:51:20.126285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.488 [2024-10-07 14:51:20.126525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.488 [2024-10-07 14:51:20.126540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.488 [2024-10-07 14:51:20.126552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.488 [2024-10-07 14:51:20.130283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.488 [2024-10-07 14:51:20.139313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.488 [2024-10-07 14:51:20.139889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.488 [2024-10-07 14:51:20.139915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.488 [2024-10-07 14:51:20.139927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.488 [2024-10-07 14:51:20.140170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.488 [2024-10-07 14:51:20.140409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.488 [2024-10-07 14:51:20.140422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.488 [2024-10-07 14:51:20.140431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.488 [2024-10-07 14:51:20.144149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.488 [2024-10-07 14:51:20.153369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.488 [2024-10-07 14:51:20.153925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.488 [2024-10-07 14:51:20.153949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.488 [2024-10-07 14:51:20.153960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.488 [2024-10-07 14:51:20.154203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.488 [2024-10-07 14:51:20.154440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.488 [2024-10-07 14:51:20.154452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.488 [2024-10-07 14:51:20.154461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.488 [2024-10-07 14:51:20.158179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.488 [2024-10-07 14:51:20.167424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.488 [2024-10-07 14:51:20.168011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.488 [2024-10-07 14:51:20.168035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.488 [2024-10-07 14:51:20.168046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.488 [2024-10-07 14:51:20.168288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.488 [2024-10-07 14:51:20.168523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.488 [2024-10-07 14:51:20.168535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.488 [2024-10-07 14:51:20.168544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.488 [2024-10-07 14:51:20.172264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.488 [2024-10-07 14:51:20.181482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.488 [2024-10-07 14:51:20.182217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.488 [2024-10-07 14:51:20.182264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.488 [2024-10-07 14:51:20.182280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.488 [2024-10-07 14:51:20.182547] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.488 [2024-10-07 14:51:20.182788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.488 [2024-10-07 14:51:20.182802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.488 [2024-10-07 14:51:20.182812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.488 [2024-10-07 14:51:20.186547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.749 [2024-10-07 14:51:20.195603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.749 [2024-10-07 14:51:20.196320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.749 [2024-10-07 14:51:20.196368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.749 [2024-10-07 14:51:20.196383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.749 [2024-10-07 14:51:20.196651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.749 [2024-10-07 14:51:20.196891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.749 [2024-10-07 14:51:20.196905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.749 [2024-10-07 14:51:20.196916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.749 [2024-10-07 14:51:20.200647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.749 [2024-10-07 14:51:20.209664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.749 [2024-10-07 14:51:20.210353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.749 [2024-10-07 14:51:20.210402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.749 [2024-10-07 14:51:20.210422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.749 [2024-10-07 14:51:20.210690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.749 [2024-10-07 14:51:20.210931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.749 [2024-10-07 14:51:20.210945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.749 [2024-10-07 14:51:20.210956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.749 [2024-10-07 14:51:20.214695] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.749 [2024-10-07 14:51:20.223715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.749 [2024-10-07 14:51:20.224241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.749 [2024-10-07 14:51:20.224288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.749 [2024-10-07 14:51:20.224305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.749 [2024-10-07 14:51:20.224572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.749 [2024-10-07 14:51:20.224813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.749 [2024-10-07 14:51:20.224827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.749 [2024-10-07 14:51:20.224837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.749 [2024-10-07 14:51:20.228578] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.749 [2024-10-07 14:51:20.237828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.749 [2024-10-07 14:51:20.238361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.749 [2024-10-07 14:51:20.238387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.749 [2024-10-07 14:51:20.238399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.749 [2024-10-07 14:51:20.238634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.749 [2024-10-07 14:51:20.238870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.749 [2024-10-07 14:51:20.238882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.749 [2024-10-07 14:51:20.238892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.749 [2024-10-07 14:51:20.242614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.749 [2024-10-07 14:51:20.251845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:56.749 [2024-10-07 14:51:20.252279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:56.749 [2024-10-07 14:51:20.252304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:56.749 [2024-10-07 14:51:20.252315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:56.749 [2024-10-07 14:51:20.252551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:56.749 [2024-10-07 14:51:20.252786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:56.749 [2024-10-07 14:51:20.252802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:56.749 [2024-10-07 14:51:20.252813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:56.749 [2024-10-07 14:51:20.256547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:56.749 [2024-10-07 14:51:20.265998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.749 [2024-10-07 14:51:20.266600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.749 [2024-10-07 14:51:20.266624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.749 [2024-10-07 14:51:20.266635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.749 [2024-10-07 14:51:20.266870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.749 [2024-10-07 14:51:20.267111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.749 [2024-10-07 14:51:20.267124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.749 [2024-10-07 14:51:20.267134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.749 [2024-10-07 14:51:20.270869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.749 [2024-10-07 14:51:20.280104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.749 [2024-10-07 14:51:20.280721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.749 [2024-10-07 14:51:20.280746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.749 [2024-10-07 14:51:20.280757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.749 [2024-10-07 14:51:20.280992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.749 [2024-10-07 14:51:20.281236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.749 [2024-10-07 14:51:20.281249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.749 [2024-10-07 14:51:20.281259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.749 [2024-10-07 14:51:20.284983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.749 [2024-10-07 14:51:20.294216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.749 [2024-10-07 14:51:20.294937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.749 [2024-10-07 14:51:20.294985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.749 [2024-10-07 14:51:20.295009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.749 [2024-10-07 14:51:20.295279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.749 [2024-10-07 14:51:20.295520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.749 [2024-10-07 14:51:20.295533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.749 [2024-10-07 14:51:20.295545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.749 [2024-10-07 14:51:20.299274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.749 [2024-10-07 14:51:20.308294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.749 [2024-10-07 14:51:20.309021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.749 [2024-10-07 14:51:20.309068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.749 [2024-10-07 14:51:20.309085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.749 [2024-10-07 14:51:20.309352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.749 [2024-10-07 14:51:20.309593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.749 [2024-10-07 14:51:20.309606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.749 [2024-10-07 14:51:20.309617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.749 [2024-10-07 14:51:20.313354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.749 [2024-10-07 14:51:20.322366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.749 [2024-10-07 14:51:20.322989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.749 [2024-10-07 14:51:20.323021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.749 [2024-10-07 14:51:20.323033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.749 [2024-10-07 14:51:20.323270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.749 [2024-10-07 14:51:20.323506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.749 [2024-10-07 14:51:20.323519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.749 [2024-10-07 14:51:20.323529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.749 [2024-10-07 14:51:20.327258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.750 [2024-10-07 14:51:20.336499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.750 [2024-10-07 14:51:20.337143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.750 [2024-10-07 14:51:20.337191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.750 [2024-10-07 14:51:20.337206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.750 [2024-10-07 14:51:20.337473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.750 [2024-10-07 14:51:20.337714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.750 [2024-10-07 14:51:20.337727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.750 [2024-10-07 14:51:20.337738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.750 [2024-10-07 14:51:20.341470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.750 [2024-10-07 14:51:20.350726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.750 [2024-10-07 14:51:20.351361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.750 [2024-10-07 14:51:20.351387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.750 [2024-10-07 14:51:20.351403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.750 [2024-10-07 14:51:20.351640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.750 [2024-10-07 14:51:20.351876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.750 [2024-10-07 14:51:20.351889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.750 [2024-10-07 14:51:20.351899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.750 [2024-10-07 14:51:20.355639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.750 [2024-10-07 14:51:20.364873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.750 [2024-10-07 14:51:20.365428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.750 [2024-10-07 14:51:20.365476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.750 [2024-10-07 14:51:20.365491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.750 [2024-10-07 14:51:20.365758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.750 [2024-10-07 14:51:20.365999] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.750 [2024-10-07 14:51:20.366030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.750 [2024-10-07 14:51:20.366042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.750 [2024-10-07 14:51:20.369791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.750 [2024-10-07 14:51:20.379039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.750 [2024-10-07 14:51:20.379773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.750 [2024-10-07 14:51:20.379821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.750 [2024-10-07 14:51:20.379838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.750 [2024-10-07 14:51:20.380113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.750 [2024-10-07 14:51:20.380356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.750 [2024-10-07 14:51:20.380370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.750 [2024-10-07 14:51:20.380381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.750 [2024-10-07 14:51:20.384108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.750 [2024-10-07 14:51:20.393126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.750 [2024-10-07 14:51:20.393751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.750 [2024-10-07 14:51:20.393777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.750 [2024-10-07 14:51:20.393790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.750 [2024-10-07 14:51:20.394033] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.750 [2024-10-07 14:51:20.394270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.750 [2024-10-07 14:51:20.394286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.750 [2024-10-07 14:51:20.394296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.750 [2024-10-07 14:51:20.398015] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.750 [2024-10-07 14:51:20.407243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.750 [2024-10-07 14:51:20.407849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.750 [2024-10-07 14:51:20.407897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.750 [2024-10-07 14:51:20.407914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.750 [2024-10-07 14:51:20.408193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.750 [2024-10-07 14:51:20.408435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.750 [2024-10-07 14:51:20.408449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.750 [2024-10-07 14:51:20.408460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.750 [2024-10-07 14:51:20.412189] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.750 [2024-10-07 14:51:20.421424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.750 [2024-10-07 14:51:20.422110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.750 [2024-10-07 14:51:20.422158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.750 [2024-10-07 14:51:20.422176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.750 [2024-10-07 14:51:20.422445] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.750 [2024-10-07 14:51:20.422686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.750 [2024-10-07 14:51:20.422700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.750 [2024-10-07 14:51:20.422710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.750 [2024-10-07 14:51:20.426448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.750 [2024-10-07 14:51:20.435477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.750 [2024-10-07 14:51:20.436066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.750 [2024-10-07 14:51:20.436093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.750 [2024-10-07 14:51:20.436104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.750 [2024-10-07 14:51:20.436341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.750 [2024-10-07 14:51:20.436577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.750 [2024-10-07 14:51:20.436589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.750 [2024-10-07 14:51:20.436599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.750 [2024-10-07 14:51:20.440328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:56.750 [2024-10-07 14:51:20.449560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:56.750 [2024-10-07 14:51:20.450284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:56.750 [2024-10-07 14:51:20.450331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:56.750 [2024-10-07 14:51:20.450348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:56.750 [2024-10-07 14:51:20.450615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:56.750 [2024-10-07 14:51:20.450856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:56.750 [2024-10-07 14:51:20.450869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:56.750 [2024-10-07 14:51:20.450880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:56.750 [2024-10-07 14:51:20.454612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.012 [2024-10-07 14:51:20.463639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.012 [2024-10-07 14:51:20.464783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.012 [2024-10-07 14:51:20.464825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.012 [2024-10-07 14:51:20.464842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.012 [2024-10-07 14:51:20.465121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.012 [2024-10-07 14:51:20.465363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.012 [2024-10-07 14:51:20.465377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.012 [2024-10-07 14:51:20.465388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.012 [2024-10-07 14:51:20.469138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.012 [2024-10-07 14:51:20.477745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.012 [2024-10-07 14:51:20.478444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.012 [2024-10-07 14:51:20.478492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.012 [2024-10-07 14:51:20.478508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.012 [2024-10-07 14:51:20.478776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.012 [2024-10-07 14:51:20.479023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.012 [2024-10-07 14:51:20.479037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.012 [2024-10-07 14:51:20.479048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.012 [2024-10-07 14:51:20.482777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.012 [2024-10-07 14:51:20.491794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.012 [2024-10-07 14:51:20.492352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.012 [2024-10-07 14:51:20.492377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.012 [2024-10-07 14:51:20.492394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.012 [2024-10-07 14:51:20.492630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.012 [2024-10-07 14:51:20.492866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.012 [2024-10-07 14:51:20.492878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.012 [2024-10-07 14:51:20.492888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.012 [2024-10-07 14:51:20.496609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.012 [2024-10-07 14:51:20.505835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.012 [2024-10-07 14:51:20.506443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.012 [2024-10-07 14:51:20.506466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.012 [2024-10-07 14:51:20.506477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.012 [2024-10-07 14:51:20.506712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.012 [2024-10-07 14:51:20.506948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.012 [2024-10-07 14:51:20.506961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.012 [2024-10-07 14:51:20.506970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.012 [2024-10-07 14:51:20.510734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.012 [2024-10-07 14:51:20.519974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.012 [2024-10-07 14:51:20.520582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.012 [2024-10-07 14:51:20.520606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.012 [2024-10-07 14:51:20.520617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.012 [2024-10-07 14:51:20.520851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.012 [2024-10-07 14:51:20.521091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.012 [2024-10-07 14:51:20.521104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.012 [2024-10-07 14:51:20.521113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.012 [2024-10-07 14:51:20.524834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.012 [2024-10-07 14:51:20.534080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.012 [2024-10-07 14:51:20.534746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.012 [2024-10-07 14:51:20.534794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.012 [2024-10-07 14:51:20.534810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.012 [2024-10-07 14:51:20.535085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.012 [2024-10-07 14:51:20.535332] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.012 [2024-10-07 14:51:20.535345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.012 [2024-10-07 14:51:20.535356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.012 [2024-10-07 14:51:20.539084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.012 [2024-10-07 14:51:20.548109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.012 [2024-10-07 14:51:20.548692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.012 [2024-10-07 14:51:20.548718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.012 [2024-10-07 14:51:20.548731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.012 [2024-10-07 14:51:20.548967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.012 [2024-10-07 14:51:20.549211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.012 [2024-10-07 14:51:20.549224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.012 [2024-10-07 14:51:20.549234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.012 [2024-10-07 14:51:20.552952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.012 [2024-10-07 14:51:20.562178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.012 [2024-10-07 14:51:20.562728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.012 [2024-10-07 14:51:20.562751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.012 [2024-10-07 14:51:20.562762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.012 [2024-10-07 14:51:20.562997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.012 [2024-10-07 14:51:20.563243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.012 [2024-10-07 14:51:20.563255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.012 [2024-10-07 14:51:20.563265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.012 [2024-10-07 14:51:20.566993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.012 [2024-10-07 14:51:20.576245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.012 [2024-10-07 14:51:20.576894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.012 [2024-10-07 14:51:20.576940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.012 [2024-10-07 14:51:20.576956] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.012 [2024-10-07 14:51:20.577230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.012 [2024-10-07 14:51:20.577471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.012 [2024-10-07 14:51:20.577486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.012 [2024-10-07 14:51:20.577497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.012 [2024-10-07 14:51:20.581234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.012 [2024-10-07 14:51:20.590254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.012 [2024-10-07 14:51:20.590825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.012 [2024-10-07 14:51:20.590850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.012 [2024-10-07 14:51:20.590862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.012 [2024-10-07 14:51:20.591103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.012 [2024-10-07 14:51:20.591340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.012 [2024-10-07 14:51:20.591353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.013 [2024-10-07 14:51:20.591363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.013 [2024-10-07 14:51:20.595087] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.013 [2024-10-07 14:51:20.604325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.013 [2024-10-07 14:51:20.604890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.013 [2024-10-07 14:51:20.604913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.013 [2024-10-07 14:51:20.604924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.013 [2024-10-07 14:51:20.605162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.013 [2024-10-07 14:51:20.605398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.013 [2024-10-07 14:51:20.605410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.013 [2024-10-07 14:51:20.605420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.013 [2024-10-07 14:51:20.609146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.013 [2024-10-07 14:51:20.618374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.013 [2024-10-07 14:51:20.618963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.013 [2024-10-07 14:51:20.618988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.013 [2024-10-07 14:51:20.619004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.013 [2024-10-07 14:51:20.619241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.013 [2024-10-07 14:51:20.619476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.013 [2024-10-07 14:51:20.619487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.013 [2024-10-07 14:51:20.619497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.013 [2024-10-07 14:51:20.623212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.013 [2024-10-07 14:51:20.632445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.013 [2024-10-07 14:51:20.633603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.013 [2024-10-07 14:51:20.633635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.013 [2024-10-07 14:51:20.633651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.013 [2024-10-07 14:51:20.633897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.013 [2024-10-07 14:51:20.634142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.013 [2024-10-07 14:51:20.634157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.013 [2024-10-07 14:51:20.634167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.013 [2024-10-07 14:51:20.637927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.013 [2024-10-07 14:51:20.646504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.013 [2024-10-07 14:51:20.647112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.013 [2024-10-07 14:51:20.647137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.013 [2024-10-07 14:51:20.647148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.013 [2024-10-07 14:51:20.647385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.013 [2024-10-07 14:51:20.647621] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.013 [2024-10-07 14:51:20.647634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.013 [2024-10-07 14:51:20.647644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.013 [2024-10-07 14:51:20.651388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.013 [2024-10-07 14:51:20.660617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.013 [2024-10-07 14:51:20.661317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.013 [2024-10-07 14:51:20.661365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.013 [2024-10-07 14:51:20.661380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.013 [2024-10-07 14:51:20.661647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.013 [2024-10-07 14:51:20.661888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.013 [2024-10-07 14:51:20.661902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.013 [2024-10-07 14:51:20.661912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.013 [2024-10-07 14:51:20.665640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.013 4344.00 IOPS, 16.97 MiB/s [2024-10-07T12:51:20.722Z] [2024-10-07 14:51:20.676189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.013 [2024-10-07 14:51:20.676895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.013 [2024-10-07 14:51:20.676942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.013 [2024-10-07 14:51:20.676958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.013 [2024-10-07 14:51:20.677233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.013 [2024-10-07 14:51:20.677478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.013 [2024-10-07 14:51:20.677492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.013 [2024-10-07 14:51:20.677503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.013 [2024-10-07 14:51:20.681232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.013 [2024-10-07 14:51:20.690249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.013 [2024-10-07 14:51:20.690830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.013 [2024-10-07 14:51:20.690855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.013 [2024-10-07 14:51:20.690867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.013 [2024-10-07 14:51:20.691108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.013 [2024-10-07 14:51:20.691345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.013 [2024-10-07 14:51:20.691357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.013 [2024-10-07 14:51:20.691367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.013 [2024-10-07 14:51:20.695091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.013 [2024-10-07 14:51:20.704318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.013 [2024-10-07 14:51:20.704932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.013 [2024-10-07 14:51:20.704955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.013 [2024-10-07 14:51:20.704966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.013 [2024-10-07 14:51:20.705205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.013 [2024-10-07 14:51:20.705441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.013 [2024-10-07 14:51:20.705453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.013 [2024-10-07 14:51:20.705463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.013 [2024-10-07 14:51:20.709193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.013 [2024-10-07 14:51:20.718421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.013 [2024-10-07 14:51:20.719083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.013 [2024-10-07 14:51:20.719131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.013 [2024-10-07 14:51:20.719148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.013 [2024-10-07 14:51:20.719418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.274 [2024-10-07 14:51:20.719658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.274 [2024-10-07 14:51:20.719673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.274 [2024-10-07 14:51:20.719684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.274 [2024-10-07 14:51:20.723420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.274 [2024-10-07 14:51:20.732445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.274 [2024-10-07 14:51:20.733056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.274 [2024-10-07 14:51:20.733090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.274 [2024-10-07 14:51:20.733102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.274 [2024-10-07 14:51:20.733346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.274 [2024-10-07 14:51:20.733581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.275 [2024-10-07 14:51:20.733594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.275 [2024-10-07 14:51:20.733604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.275 [2024-10-07 14:51:20.737327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.275 [2024-10-07 14:51:20.746555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.275 [2024-10-07 14:51:20.747125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.275 [2024-10-07 14:51:20.747172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.275 [2024-10-07 14:51:20.747189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.275 [2024-10-07 14:51:20.747459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.275 [2024-10-07 14:51:20.747699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.275 [2024-10-07 14:51:20.747712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.275 [2024-10-07 14:51:20.747724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.275 [2024-10-07 14:51:20.751457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.275 [2024-10-07 14:51:20.760690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.275 [2024-10-07 14:51:20.761372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.275 [2024-10-07 14:51:20.761420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.275 [2024-10-07 14:51:20.761436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.275 [2024-10-07 14:51:20.761702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.275 [2024-10-07 14:51:20.761942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.275 [2024-10-07 14:51:20.761956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.275 [2024-10-07 14:51:20.761966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.275 [2024-10-07 14:51:20.765700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.275 [2024-10-07 14:51:20.774765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.275 [2024-10-07 14:51:20.775417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.275 [2024-10-07 14:51:20.775448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.275 [2024-10-07 14:51:20.775461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.275 [2024-10-07 14:51:20.775697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.275 [2024-10-07 14:51:20.775933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.275 [2024-10-07 14:51:20.775946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.275 [2024-10-07 14:51:20.775957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.275 [2024-10-07 14:51:20.779680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.275 [2024-10-07 14:51:20.788909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.275 [2024-10-07 14:51:20.789493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.275 [2024-10-07 14:51:20.789518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.275 [2024-10-07 14:51:20.789529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.275 [2024-10-07 14:51:20.789764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.275 [2024-10-07 14:51:20.789998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.275 [2024-10-07 14:51:20.790018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.275 [2024-10-07 14:51:20.790027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.275 [2024-10-07 14:51:20.793744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.275 [2024-10-07 14:51:20.802975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.275 [2024-10-07 14:51:20.803810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.275 [2024-10-07 14:51:20.803856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.275 [2024-10-07 14:51:20.803872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.275 [2024-10-07 14:51:20.804148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.275 [2024-10-07 14:51:20.804390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.275 [2024-10-07 14:51:20.804404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.275 [2024-10-07 14:51:20.804416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.275 [2024-10-07 14:51:20.808211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.275 [2024-10-07 14:51:20.817017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.275 [2024-10-07 14:51:20.817547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.275 [2024-10-07 14:51:20.817573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.275 [2024-10-07 14:51:20.817585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.275 [2024-10-07 14:51:20.817821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.275 [2024-10-07 14:51:20.818069] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.275 [2024-10-07 14:51:20.818082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.275 [2024-10-07 14:51:20.818092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.275 [2024-10-07 14:51:20.821810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.275 [2024-10-07 14:51:20.831047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.275 [2024-10-07 14:51:20.831618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.275 [2024-10-07 14:51:20.831641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.275 [2024-10-07 14:51:20.831652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.275 [2024-10-07 14:51:20.831887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.275 [2024-10-07 14:51:20.832126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.275 [2024-10-07 14:51:20.832141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.275 [2024-10-07 14:51:20.832151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3299414 Killed "${NVMF_APP[@]}" "$@" 00:40:57.275 [2024-10-07 14:51:20.835874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.275 14:51:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:40:57.275 14:51:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:40:57.275 14:51:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:57.275 14:51:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:57.275 14:51:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:57.275 [2024-10-07 14:51:20.845104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.275 14:51:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=3301342 00:40:57.275 14:51:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 3301342 00:40:57.275 [2024-10-07 14:51:20.845710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.275 [2024-10-07 14:51:20.845734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.275 [2024-10-07 14:51:20.845746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.275 14:51:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:40:57.275 [2024-10-07 14:51:20.845982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.275 14:51:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3301342 ']' 00:40:57.275 [2024-10-07 14:51:20.846222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.275 [2024-10-07 14:51:20.846235] 
nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.275 [2024-10-07 14:51:20.846245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.275 14:51:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:57.275 14:51:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:57.275 14:51:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:57.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:57.275 14:51:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:57.275 14:51:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:57.275 [2024-10-07 14:51:20.849965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.275 [2024-10-07 14:51:20.859200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.275 [2024-10-07 14:51:20.859763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.275 [2024-10-07 14:51:20.859785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.275 [2024-10-07 14:51:20.859797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.275 [2024-10-07 14:51:20.860039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.275 [2024-10-07 14:51:20.860275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.276 [2024-10-07 14:51:20.860288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.276 [2024-10-07 14:51:20.860298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.276 [2024-10-07 14:51:20.864020] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.276 [2024-10-07 14:51:20.873265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.276 [2024-10-07 14:51:20.873834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.276 [2024-10-07 14:51:20.873880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.276 [2024-10-07 14:51:20.873898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.276 [2024-10-07 14:51:20.874174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.276 [2024-10-07 14:51:20.874416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.276 [2024-10-07 14:51:20.874430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.276 [2024-10-07 14:51:20.874441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.276 [2024-10-07 14:51:20.878167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.276 [2024-10-07 14:51:20.887410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.276 [2024-10-07 14:51:20.888035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.276 [2024-10-07 14:51:20.888085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.276 [2024-10-07 14:51:20.888102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.276 [2024-10-07 14:51:20.888370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.276 [2024-10-07 14:51:20.888611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.276 [2024-10-07 14:51:20.888625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.276 [2024-10-07 14:51:20.888641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.276 [2024-10-07 14:51:20.892379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.276 [2024-10-07 14:51:20.901633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.276 [2024-10-07 14:51:20.902178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.276 [2024-10-07 14:51:20.902226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.276 [2024-10-07 14:51:20.902244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.276 [2024-10-07 14:51:20.902513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.276 [2024-10-07 14:51:20.902754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.276 [2024-10-07 14:51:20.902768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.276 [2024-10-07 14:51:20.902779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.276 [2024-10-07 14:51:20.906522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.276 [2024-10-07 14:51:20.915776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.276 [2024-10-07 14:51:20.916413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.276 [2024-10-07 14:51:20.916440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.276 [2024-10-07 14:51:20.916452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.276 [2024-10-07 14:51:20.916689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.276 [2024-10-07 14:51:20.916925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.276 [2024-10-07 14:51:20.916937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.276 [2024-10-07 14:51:20.916947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.276 [2024-10-07 14:51:20.920678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:57.276 [2024-10-07 14:51:20.928716] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:40:57.276 [2024-10-07 14:51:20.928813] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:57.276 [2024-10-07 14:51:20.929936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.276 [2024-10-07 14:51:20.930513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.276 [2024-10-07 14:51:20.930560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.276 [2024-10-07 14:51:20.930578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.276 [2024-10-07 14:51:20.930847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.276 [2024-10-07 14:51:20.931108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.276 [2024-10-07 14:51:20.931124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.276 [2024-10-07 14:51:20.931140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.276 [2024-10-07 14:51:20.934970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.276 [2024-10-07 14:51:20.944010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.276 [2024-10-07 14:51:20.944610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.276 [2024-10-07 14:51:20.944635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.276 [2024-10-07 14:51:20.944648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.276 [2024-10-07 14:51:20.944886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.276 [2024-10-07 14:51:20.945130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.276 [2024-10-07 14:51:20.945144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.276 [2024-10-07 14:51:20.945154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.276 [2024-10-07 14:51:20.948878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.276 [2024-10-07 14:51:20.958164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.276 [2024-10-07 14:51:20.958902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.276 [2024-10-07 14:51:20.958950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.276 [2024-10-07 14:51:20.958966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.276 [2024-10-07 14:51:20.959243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.276 [2024-10-07 14:51:20.959486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.276 [2024-10-07 14:51:20.959500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.276 [2024-10-07 14:51:20.959511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.276 [2024-10-07 14:51:20.963249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.276 [2024-10-07 14:51:20.972312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.276 [2024-10-07 14:51:20.972895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.276 [2024-10-07 14:51:20.972943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.276 [2024-10-07 14:51:20.972969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.276 [2024-10-07 14:51:20.973248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.276 [2024-10-07 14:51:20.973490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.276 [2024-10-07 14:51:20.973504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.276 [2024-10-07 14:51:20.973514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.276 [2024-10-07 14:51:20.977249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.537 [2024-10-07 14:51:20.986505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.537 [2024-10-07 14:51:20.987142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.537 [2024-10-07 14:51:20.987190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.537 [2024-10-07 14:51:20.987208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.537 [2024-10-07 14:51:20.987477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.537 [2024-10-07 14:51:20.987719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.537 [2024-10-07 14:51:20.987732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.537 [2024-10-07 14:51:20.987743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.537 [2024-10-07 14:51:20.991487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.537 [2024-10-07 14:51:21.000520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.537 [2024-10-07 14:51:21.001127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.537 [2024-10-07 14:51:21.001175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.537 [2024-10-07 14:51:21.001192] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.537 [2024-10-07 14:51:21.001463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.537 [2024-10-07 14:51:21.001704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.537 [2024-10-07 14:51:21.001718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.537 [2024-10-07 14:51:21.001729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.537 [2024-10-07 14:51:21.005467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.537 [2024-10-07 14:51:21.014731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.537 [2024-10-07 14:51:21.015351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.537 [2024-10-07 14:51:21.015377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.537 [2024-10-07 14:51:21.015389] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.537 [2024-10-07 14:51:21.015626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.537 [2024-10-07 14:51:21.015862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.537 [2024-10-07 14:51:21.015875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.537 [2024-10-07 14:51:21.015885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.537 [2024-10-07 14:51:21.019620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.537 [2024-10-07 14:51:21.028857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.537 [2024-10-07 14:51:21.029434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.537 [2024-10-07 14:51:21.029458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.537 [2024-10-07 14:51:21.029469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.537 [2024-10-07 14:51:21.029709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.537 [2024-10-07 14:51:21.029945] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.537 [2024-10-07 14:51:21.029957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.537 [2024-10-07 14:51:21.029967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.537 [2024-10-07 14:51:21.033711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.537 [2024-10-07 14:51:21.042938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.537 [2024-10-07 14:51:21.043635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.537 [2024-10-07 14:51:21.043684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.537 [2024-10-07 14:51:21.043701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.537 [2024-10-07 14:51:21.043970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.537 [2024-10-07 14:51:21.044220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.537 [2024-10-07 14:51:21.044234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.537 [2024-10-07 14:51:21.044245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.537 [2024-10-07 14:51:21.047972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.537 [2024-10-07 14:51:21.057005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.537 [2024-10-07 14:51:21.057741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.537 [2024-10-07 14:51:21.057789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.537 [2024-10-07 14:51:21.057805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.537 [2024-10-07 14:51:21.058083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.537 [2024-10-07 14:51:21.058325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.537 [2024-10-07 14:51:21.058339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.537 [2024-10-07 14:51:21.058350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.537 [2024-10-07 14:51:21.062078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.537 [2024-10-07 14:51:21.067673] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:57.537 [2024-10-07 14:51:21.071119] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.537 [2024-10-07 14:51:21.071747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.537 [2024-10-07 14:51:21.071772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.537 [2024-10-07 14:51:21.071785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.537 [2024-10-07 14:51:21.072029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.537 [2024-10-07 14:51:21.072266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.537 [2024-10-07 14:51:21.072283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.537 [2024-10-07 14:51:21.072295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.537 [2024-10-07 14:51:21.076021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.537 [2024-10-07 14:51:21.085262] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.537 [2024-10-07 14:51:21.085874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.537 [2024-10-07 14:51:21.085898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.537 [2024-10-07 14:51:21.085909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.537 [2024-10-07 14:51:21.086150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.537 [2024-10-07 14:51:21.086386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.537 [2024-10-07 14:51:21.086398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.537 [2024-10-07 14:51:21.086409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.537 [2024-10-07 14:51:21.090135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.537 [2024-10-07 14:51:21.099367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.537 [2024-10-07 14:51:21.099968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.537 [2024-10-07 14:51:21.099991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.537 [2024-10-07 14:51:21.100009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.537 [2024-10-07 14:51:21.100245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.537 [2024-10-07 14:51:21.100480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.537 [2024-10-07 14:51:21.100492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.537 [2024-10-07 14:51:21.100502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.537 [2024-10-07 14:51:21.104219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.537 [2024-10-07 14:51:21.113448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.537 [2024-10-07 14:51:21.114020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.537 [2024-10-07 14:51:21.114043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.537 [2024-10-07 14:51:21.114055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.537 [2024-10-07 14:51:21.114291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.537 [2024-10-07 14:51:21.114528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.537 [2024-10-07 14:51:21.114541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.537 [2024-10-07 14:51:21.114550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.537 [2024-10-07 14:51:21.118274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.538 [2024-10-07 14:51:21.127516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.538 [2024-10-07 14:51:21.128068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.538 [2024-10-07 14:51:21.128091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.538 [2024-10-07 14:51:21.128102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.538 [2024-10-07 14:51:21.128339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.538 [2024-10-07 14:51:21.128575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.538 [2024-10-07 14:51:21.128587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.538 [2024-10-07 14:51:21.128596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.538 [2024-10-07 14:51:21.132330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.538 [2024-10-07 14:51:21.141577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.538 [2024-10-07 14:51:21.142228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.538 [2024-10-07 14:51:21.142254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.538 [2024-10-07 14:51:21.142266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.538 [2024-10-07 14:51:21.142502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.538 [2024-10-07 14:51:21.142738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.538 [2024-10-07 14:51:21.142751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.538 [2024-10-07 14:51:21.142762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.538 [2024-10-07 14:51:21.146495] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.538 [2024-10-07 14:51:21.155753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.538 [2024-10-07 14:51:21.156433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.538 [2024-10-07 14:51:21.156483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.538 [2024-10-07 14:51:21.156500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.538 [2024-10-07 14:51:21.156772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.538 [2024-10-07 14:51:21.157024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.538 [2024-10-07 14:51:21.157039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.538 [2024-10-07 14:51:21.157050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.538 [2024-10-07 14:51:21.160783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.538 [2024-10-07 14:51:21.169796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.538 [2024-10-07 14:51:21.170274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.538 [2024-10-07 14:51:21.170300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.538 [2024-10-07 14:51:21.170318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.538 [2024-10-07 14:51:21.170560] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.538 [2024-10-07 14:51:21.170803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.538 [2024-10-07 14:51:21.170816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.538 [2024-10-07 14:51:21.170826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.538 [2024-10-07 14:51:21.174573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.538 [2024-10-07 14:51:21.183806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.538 [2024-10-07 14:51:21.184472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.538 [2024-10-07 14:51:21.184520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.538 [2024-10-07 14:51:21.184536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.538 [2024-10-07 14:51:21.184805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.538 [2024-10-07 14:51:21.185059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.538 [2024-10-07 14:51:21.185074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.538 [2024-10-07 14:51:21.185085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.538 [2024-10-07 14:51:21.188817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.538 [2024-10-07 14:51:21.197837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.538 [2024-10-07 14:51:21.198322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.538 [2024-10-07 14:51:21.198348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.538 [2024-10-07 14:51:21.198360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.538 [2024-10-07 14:51:21.198598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.538 [2024-10-07 14:51:21.198834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.538 [2024-10-07 14:51:21.198846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.538 [2024-10-07 14:51:21.198857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.538 [2024-10-07 14:51:21.202586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:40:57.538 [2024-10-07 14:51:21.206405] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:57.538 [2024-10-07 14:51:21.206434] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:57.538 [2024-10-07 14:51:21.206442] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:57.538 [2024-10-07 14:51:21.206451] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:40:57.538 [2024-10-07 14:51:21.206457] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:57.538 [2024-10-07 14:51:21.207834] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:40:57.538 [2024-10-07 14:51:21.207950] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:57.538 [2024-10-07 14:51:21.207976] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:40:57.538 [2024-10-07 14:51:21.212114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:57.538 [2024-10-07 14:51:21.212812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:57.538 [2024-10-07 14:51:21.212860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:57.538 [2024-10-07 14:51:21.212876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:57.538 [2024-10-07 14:51:21.213156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:57.538 [2024-10-07 14:51:21.213398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:57.538 [2024-10-07 14:51:21.213412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:57.538 [2024-10-07 14:51:21.213423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:57.538 [2024-10-07 14:51:21.217159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:57.538 [2024-10-07 14:51:21.226191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:57.538 [2024-10-07 14:51:21.226792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:57.538 [2024-10-07 14:51:21.226840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:57.538 [2024-10-07 14:51:21.226858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:57.538 [2024-10-07 14:51:21.227138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:57.538 [2024-10-07 14:51:21.227381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:57.538 [2024-10-07 14:51:21.227395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:57.538 [2024-10-07 14:51:21.227406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:57.538 [2024-10-07 14:51:21.231141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:57.538 [2024-10-07 14:51:21.240398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:57.538 [2024-10-07 14:51:21.241097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:57.538 [2024-10-07 14:51:21.241146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:57.538 [2024-10-07 14:51:21.241161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:57.538 [2024-10-07 14:51:21.241430] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:57.538 [2024-10-07 14:51:21.241671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:57.538 [2024-10-07 14:51:21.241685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:57.538 [2024-10-07 14:51:21.241696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:57.799 [2024-10-07 14:51:21.245435] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:57.799 [2024-10-07 14:51:21.254467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:57.799 [2024-10-07 14:51:21.255124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:57.799 [2024-10-07 14:51:21.255152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:57.799 [2024-10-07 14:51:21.255169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:57.799 [2024-10-07 14:51:21.255408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:57.799 [2024-10-07 14:51:21.255645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:57.799 [2024-10-07 14:51:21.255658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:57.799 [2024-10-07 14:51:21.255668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:57.799 [2024-10-07 14:51:21.259399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:57.799 [2024-10-07 14:51:21.268648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:57.799 [2024-10-07 14:51:21.269369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:57.799 [2024-10-07 14:51:21.269418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:57.799 [2024-10-07 14:51:21.269434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:57.799 [2024-10-07 14:51:21.269703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:57.799 [2024-10-07 14:51:21.269947] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:57.799 [2024-10-07 14:51:21.269961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:57.799 [2024-10-07 14:51:21.269972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:57.799 [2024-10-07 14:51:21.273734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:57.799 [2024-10-07 14:51:21.282765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:57.799 [2024-10-07 14:51:21.283366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:57.799 [2024-10-07 14:51:21.283414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:57.799 [2024-10-07 14:51:21.283432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:57.799 [2024-10-07 14:51:21.283700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:57.799 [2024-10-07 14:51:21.283941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:57.799 [2024-10-07 14:51:21.283955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:57.799 [2024-10-07 14:51:21.283967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:57.799 [2024-10-07 14:51:21.287705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:57.799 [2024-10-07 14:51:21.296949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:57.799 [2024-10-07 14:51:21.297641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:57.799 [2024-10-07 14:51:21.297689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:57.799 [2024-10-07 14:51:21.297705] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:57.799 [2024-10-07 14:51:21.297972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:57.799 [2024-10-07 14:51:21.298229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:57.799 [2024-10-07 14:51:21.298244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:57.799 [2024-10-07 14:51:21.298255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:57.799 [2024-10-07 14:51:21.301980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:57.799 [2024-10-07 14:51:21.311004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:57.799 [2024-10-07 14:51:21.311771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:57.799 [2024-10-07 14:51:21.311819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:57.799 [2024-10-07 14:51:21.311835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:57.799 [2024-10-07 14:51:21.312114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:57.799 [2024-10-07 14:51:21.312356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:57.799 [2024-10-07 14:51:21.312370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:57.799 [2024-10-07 14:51:21.312380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:57.799 [2024-10-07 14:51:21.316113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:57.799 [2024-10-07 14:51:21.325132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:57.799 [2024-10-07 14:51:21.325853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:57.800 [2024-10-07 14:51:21.325901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:57.800 [2024-10-07 14:51:21.325917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:57.800 [2024-10-07 14:51:21.326194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:57.800 [2024-10-07 14:51:21.326439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:57.800 [2024-10-07 14:51:21.326453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:57.800 [2024-10-07 14:51:21.326463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:57.800 [2024-10-07 14:51:21.330190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:57.800 [2024-10-07 14:51:21.339231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:57.800 [2024-10-07 14:51:21.339803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:57.800 [2024-10-07 14:51:21.339850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:57.800 [2024-10-07 14:51:21.339866] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:57.800 [2024-10-07 14:51:21.340144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:57.800 [2024-10-07 14:51:21.340386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:57.800 [2024-10-07 14:51:21.340400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:57.800 [2024-10-07 14:51:21.340412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:57.800 [2024-10-07 14:51:21.344146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:57.800 [2024-10-07 14:51:21.353388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:57.800 [2024-10-07 14:51:21.353987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:57.800 [2024-10-07 14:51:21.354044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:57.800 [2024-10-07 14:51:21.354060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:57.800 [2024-10-07 14:51:21.354328] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:57.800 [2024-10-07 14:51:21.354569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:57.800 [2024-10-07 14:51:21.354583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:57.800 [2024-10-07 14:51:21.354594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:57.800 [2024-10-07 14:51:21.358321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:57.800 [2024-10-07 14:51:21.367561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:57.800 [2024-10-07 14:51:21.368325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:57.800 [2024-10-07 14:51:21.368372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:57.800 [2024-10-07 14:51:21.368388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:57.800 [2024-10-07 14:51:21.368657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:57.800 [2024-10-07 14:51:21.368899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:57.800 [2024-10-07 14:51:21.368913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:57.800 [2024-10-07 14:51:21.368933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:57.800 [2024-10-07 14:51:21.372696] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:57.800 [2024-10-07 14:51:21.381713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:57.800 [2024-10-07 14:51:21.382425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:57.800 [2024-10-07 14:51:21.382473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:57.800 [2024-10-07 14:51:21.382489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:57.800 [2024-10-07 14:51:21.382757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:57.800 [2024-10-07 14:51:21.382999] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:57.800 [2024-10-07 14:51:21.383022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:57.800 [2024-10-07 14:51:21.383033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:57.800 [2024-10-07 14:51:21.386761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:57.800 [2024-10-07 14:51:21.395772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:57.800 [2024-10-07 14:51:21.396474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:57.800 [2024-10-07 14:51:21.396522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:57.800 [2024-10-07 14:51:21.396543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:57.800 [2024-10-07 14:51:21.396811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:57.800 [2024-10-07 14:51:21.397062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:57.800 [2024-10-07 14:51:21.397077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:57.800 [2024-10-07 14:51:21.397088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:57.800 [2024-10-07 14:51:21.400812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:57.800 [2024-10-07 14:51:21.409824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:57.800 [2024-10-07 14:51:21.410525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:57.800 [2024-10-07 14:51:21.410573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:57.800 [2024-10-07 14:51:21.410589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:57.800 [2024-10-07 14:51:21.410858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:57.800 [2024-10-07 14:51:21.411109] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:57.800 [2024-10-07 14:51:21.411124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:57.800 [2024-10-07 14:51:21.411136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:57.800 [2024-10-07 14:51:21.414863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:57.800 [2024-10-07 14:51:21.423887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:57.800 [2024-10-07 14:51:21.424518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:57.800 [2024-10-07 14:51:21.424544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:57.800 [2024-10-07 14:51:21.424556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:57.800 [2024-10-07 14:51:21.424793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:57.800 [2024-10-07 14:51:21.425034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:57.800 [2024-10-07 14:51:21.425048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:57.800 [2024-10-07 14:51:21.425058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:57.800 [2024-10-07 14:51:21.428778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:57.800 [2024-10-07 14:51:21.438018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:57.800 [2024-10-07 14:51:21.438629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:57.800 [2024-10-07 14:51:21.438653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:57.800 [2024-10-07 14:51:21.438664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:57.800 [2024-10-07 14:51:21.438900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:57.800 [2024-10-07 14:51:21.439147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:57.800 [2024-10-07 14:51:21.439161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:57.800 [2024-10-07 14:51:21.439171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:57.800 [2024-10-07 14:51:21.442889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:57.800 [2024-10-07 14:51:21.452116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:57.800 [2024-10-07 14:51:21.452533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:57.800 [2024-10-07 14:51:21.452556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:57.800 [2024-10-07 14:51:21.452567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:57.800 [2024-10-07 14:51:21.452801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:57.800 [2024-10-07 14:51:21.453041] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:57.800 [2024-10-07 14:51:21.453055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:57.800 [2024-10-07 14:51:21.453065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:57.800 [2024-10-07 14:51:21.456786] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:57.800 [2024-10-07 14:51:21.466233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:57.800 [2024-10-07 14:51:21.466836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:57.800 [2024-10-07 14:51:21.466859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:57.800 [2024-10-07 14:51:21.466870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:57.800 [2024-10-07 14:51:21.467157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:57.800 [2024-10-07 14:51:21.467395] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:57.801 [2024-10-07 14:51:21.467408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:57.801 [2024-10-07 14:51:21.467417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:57.801 [2024-10-07 14:51:21.471138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:57.801 [2024-10-07 14:51:21.480386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:57.801 [2024-10-07 14:51:21.481119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:57.801 [2024-10-07 14:51:21.481166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:57.801 [2024-10-07 14:51:21.481182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:57.801 [2024-10-07 14:51:21.481449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:57.801 [2024-10-07 14:51:21.481690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:57.801 [2024-10-07 14:51:21.481703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:57.801 [2024-10-07 14:51:21.481715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:57.801 [2024-10-07 14:51:21.485451] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:57.801 [2024-10-07 14:51:21.494460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:57.801 [2024-10-07 14:51:21.495219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:57.801 [2024-10-07 14:51:21.495267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:57.801 [2024-10-07 14:51:21.495283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:57.801 [2024-10-07 14:51:21.495550] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:57.801 [2024-10-07 14:51:21.495791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:57.801 [2024-10-07 14:51:21.495805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:57.801 [2024-10-07 14:51:21.495816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:57.801 [2024-10-07 14:51:21.499551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:58.062 [2024-10-07 14:51:21.508565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:58.062 [2024-10-07 14:51:21.509262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:58.062 [2024-10-07 14:51:21.509310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:58.062 [2024-10-07 14:51:21.509326] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:58.062 [2024-10-07 14:51:21.509595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:58.062 [2024-10-07 14:51:21.509836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:58.062 [2024-10-07 14:51:21.509850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:58.062 [2024-10-07 14:51:21.509860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:58.062 [2024-10-07 14:51:21.513592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:58.062 [2024-10-07 14:51:21.522610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:58.062 [2024-10-07 14:51:21.523313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:58.062 [2024-10-07 14:51:21.523361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:58.062 [2024-10-07 14:51:21.523376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:58.062 [2024-10-07 14:51:21.523644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:58.062 [2024-10-07 14:51:21.523884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:58.062 [2024-10-07 14:51:21.523897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:58.062 [2024-10-07 14:51:21.523908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:58.062 [2024-10-07 14:51:21.527645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:58.062 [2024-10-07 14:51:21.536671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:58.062 [2024-10-07 14:51:21.537409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:58.062 [2024-10-07 14:51:21.537461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:58.062 [2024-10-07 14:51:21.537477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:58.062 [2024-10-07 14:51:21.537744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:58.062 [2024-10-07 14:51:21.537984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:58.062 [2024-10-07 14:51:21.537997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:58.062 [2024-10-07 14:51:21.538017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:58.062 [2024-10-07 14:51:21.541742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:58.062 [2024-10-07 14:51:21.550765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:40:58.062 [2024-10-07 14:51:21.551420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:40:58.062 [2024-10-07 14:51:21.551445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420
00:40:58.062 [2024-10-07 14:51:21.551456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set
00:40:58.062 [2024-10-07 14:51:21.551693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor
00:40:58.062 [2024-10-07 14:51:21.551927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:40:58.062 [2024-10-07 14:51:21.551939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:40:58.062 [2024-10-07 14:51:21.551949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:40:58.062 [2024-10-07 14:51:21.555673] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:40:58.062 [2024-10-07 14:51:21.564894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:58.062 [2024-10-07 14:51:21.565471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:58.062 [2024-10-07 14:51:21.565495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:58.062 [2024-10-07 14:51:21.565506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:58.062 [2024-10-07 14:51:21.565741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:58.062 [2024-10-07 14:51:21.565975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:58.063 [2024-10-07 14:51:21.565986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:58.063 [2024-10-07 14:51:21.565996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:58.063 [2024-10-07 14:51:21.569724] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:58.063 [2024-10-07 14:51:21.578996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:58.063 [2024-10-07 14:51:21.579601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:58.063 [2024-10-07 14:51:21.579624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:58.063 [2024-10-07 14:51:21.579635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:58.063 [2024-10-07 14:51:21.579870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:58.063 [2024-10-07 14:51:21.580117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:58.063 [2024-10-07 14:51:21.580129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:58.063 [2024-10-07 14:51:21.580139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:58.063 [2024-10-07 14:51:21.583855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:58.063 [2024-10-07 14:51:21.593085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:58.063 [2024-10-07 14:51:21.593815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:58.063 [2024-10-07 14:51:21.593862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:58.063 [2024-10-07 14:51:21.593878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:58.063 [2024-10-07 14:51:21.594153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:58.063 [2024-10-07 14:51:21.594394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:58.063 [2024-10-07 14:51:21.594407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:58.063 [2024-10-07 14:51:21.594418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:58.063 [2024-10-07 14:51:21.598143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:58.063 [2024-10-07 14:51:21.607159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:58.063 [2024-10-07 14:51:21.607840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:58.063 [2024-10-07 14:51:21.607887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:58.063 [2024-10-07 14:51:21.607902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:58.063 [2024-10-07 14:51:21.608178] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:58.063 [2024-10-07 14:51:21.608418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:58.063 [2024-10-07 14:51:21.608431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:58.063 [2024-10-07 14:51:21.608442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:58.063 [2024-10-07 14:51:21.612172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:58.063 [2024-10-07 14:51:21.621185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:58.063 [2024-10-07 14:51:21.621666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:58.063 [2024-10-07 14:51:21.621691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:58.063 [2024-10-07 14:51:21.621703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:58.063 [2024-10-07 14:51:21.621940] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:58.063 [2024-10-07 14:51:21.622181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:58.063 [2024-10-07 14:51:21.622194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:58.063 [2024-10-07 14:51:21.622204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:58.063 [2024-10-07 14:51:21.625924] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:58.063 [2024-10-07 14:51:21.635378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:58.063 [2024-10-07 14:51:21.635959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:58.063 [2024-10-07 14:51:21.635982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:58.063 [2024-10-07 14:51:21.635993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:58.063 [2024-10-07 14:51:21.636235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:58.063 [2024-10-07 14:51:21.636470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:58.063 [2024-10-07 14:51:21.636481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:58.063 [2024-10-07 14:51:21.636490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:58.063 [2024-10-07 14:51:21.640210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:58.063 [2024-10-07 14:51:21.649430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:58.063 [2024-10-07 14:51:21.650021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:58.063 [2024-10-07 14:51:21.650047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:58.063 [2024-10-07 14:51:21.650059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:58.063 [2024-10-07 14:51:21.650296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:58.063 [2024-10-07 14:51:21.650531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:58.063 [2024-10-07 14:51:21.650543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:58.063 [2024-10-07 14:51:21.650553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:58.063 [2024-10-07 14:51:21.654278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:58.063 [2024-10-07 14:51:21.663507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:58.063 [2024-10-07 14:51:21.664144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:58.063 [2024-10-07 14:51:21.664191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:58.063 [2024-10-07 14:51:21.664207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:58.063 [2024-10-07 14:51:21.664474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:58.063 [2024-10-07 14:51:21.664714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:58.063 [2024-10-07 14:51:21.664727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:58.063 [2024-10-07 14:51:21.664737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:58.063 [2024-10-07 14:51:21.668467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:58.063 3620.00 IOPS, 14.14 MiB/s [2024-10-07T12:51:21.772Z] [2024-10-07 14:51:21.678563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:58.063 [2024-10-07 14:51:21.679281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:58.063 [2024-10-07 14:51:21.679332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:58.063 [2024-10-07 14:51:21.679347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:58.063 [2024-10-07 14:51:21.679614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:58.063 [2024-10-07 14:51:21.679854] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:58.063 [2024-10-07 14:51:21.679867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:58.063 [2024-10-07 14:51:21.679878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:58.063 [2024-10-07 14:51:21.683609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:58.063 [2024-10-07 14:51:21.692625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:58.063 [2024-10-07 14:51:21.693350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:58.063 [2024-10-07 14:51:21.693397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:58.063 [2024-10-07 14:51:21.693412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:58.063 [2024-10-07 14:51:21.693679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:58.063 [2024-10-07 14:51:21.693918] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:58.063 [2024-10-07 14:51:21.693931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:58.063 [2024-10-07 14:51:21.693942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:58.063 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:58.063 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:40:58.063 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:58.063 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:58.063 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:58.063 [2024-10-07 14:51:21.697672] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:58.063 [2024-10-07 14:51:21.706691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:58.063 [2024-10-07 14:51:21.707295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:58.063 [2024-10-07 14:51:21.707343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:58.063 [2024-10-07 14:51:21.707361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:58.063 [2024-10-07 14:51:21.707628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:58.063 [2024-10-07 14:51:21.707868] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:58.063 [2024-10-07 14:51:21.707880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:58.064 [2024-10-07 14:51:21.707891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:58.064 [2024-10-07 14:51:21.711621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:58.064 [2024-10-07 14:51:21.720856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:58.064 [2024-10-07 14:51:21.721457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:58.064 [2024-10-07 14:51:21.721483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:58.064 [2024-10-07 14:51:21.721494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:58.064 [2024-10-07 14:51:21.721731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:58.064 [2024-10-07 14:51:21.721965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:58.064 [2024-10-07 14:51:21.721978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:58.064 [2024-10-07 14:51:21.721988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:58.064 [2024-10-07 14:51:21.725706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:58.064 [2024-10-07 14:51:21.734947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:58.064 [2024-10-07 14:51:21.735553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:58.064 [2024-10-07 14:51:21.735576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:58.064 [2024-10-07 14:51:21.735587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:58.064 [2024-10-07 14:51:21.735822] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:58.064 [2024-10-07 14:51:21.736063] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:58.064 [2024-10-07 14:51:21.736075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:58.064 [2024-10-07 14:51:21.736085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:58.064 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:58.064 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:58.064 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.064 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:58.064 [2024-10-07 14:51:21.739804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:58.064 [2024-10-07 14:51:21.744498] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:58.064 [2024-10-07 14:51:21.749029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:58.064 [2024-10-07 14:51:21.749662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:58.064 [2024-10-07 14:51:21.749686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:58.064 [2024-10-07 14:51:21.749697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:58.064 [2024-10-07 14:51:21.749932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:58.064 [2024-10-07 14:51:21.750172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:58.064 [2024-10-07 14:51:21.750185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:58.064 [2024-10-07 14:51:21.750194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:58.064 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.064 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:58.064 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.064 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:58.064 [2024-10-07 14:51:21.753911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:58.064 [2024-10-07 14:51:21.763133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:58.064 [2024-10-07 14:51:21.763856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:58.064 [2024-10-07 14:51:21.763903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:58.064 [2024-10-07 14:51:21.763919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:58.064 [2024-10-07 14:51:21.764197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:58.064 [2024-10-07 14:51:21.764439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:58.064 [2024-10-07 14:51:21.764452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:58.064 [2024-10-07 14:51:21.764462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:58.064 [2024-10-07 14:51:21.768195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:58.324 [2024-10-07 14:51:21.777240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:58.324 [2024-10-07 14:51:21.777976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:58.324 [2024-10-07 14:51:21.778038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:58.324 [2024-10-07 14:51:21.778053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:58.324 [2024-10-07 14:51:21.778322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:58.324 [2024-10-07 14:51:21.778562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:58.324 [2024-10-07 14:51:21.778575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:58.324 [2024-10-07 14:51:21.778586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:58.324 [2024-10-07 14:51:21.782319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:58.324 [2024-10-07 14:51:21.791339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:58.324 [2024-10-07 14:51:21.791955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:58.324 [2024-10-07 14:51:21.791980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:58.324 [2024-10-07 14:51:21.791992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:58.324 [2024-10-07 14:51:21.792236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:58.324 [2024-10-07 14:51:21.792472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:58.324 [2024-10-07 14:51:21.792483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:58.324 [2024-10-07 14:51:21.792493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:58.324 Malloc0 00:40:58.324 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.324 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:58.324 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.324 [2024-10-07 14:51:21.796209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:58.324 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:58.324 [2024-10-07 14:51:21.805654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:58.324 [2024-10-07 14:51:21.806367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:58.324 [2024-10-07 14:51:21.806414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:58.324 [2024-10-07 14:51:21.806430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:58.324 [2024-10-07 14:51:21.806698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:58.324 [2024-10-07 14:51:21.806939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:58.324 [2024-10-07 14:51:21.806952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:58.324 [2024-10-07 14:51:21.806963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:58.324 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.324 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:58.324 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.324 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:58.324 [2024-10-07 14:51:21.810700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:58.324 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.324 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:58.324 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:58.325 [2024-10-07 14:51:21.819720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:58.325 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:58.325 [2024-10-07 14:51:21.820466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:40:58.325 [2024-10-07 14:51:21.820513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039e200 with addr=10.0.0.2, port=4420 00:40:58.325 [2024-10-07 14:51:21.820529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:40:58.325 [2024-10-07 14:51:21.820796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file descriptor 00:40:58.325 [2024-10-07 14:51:21.821043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:40:58.325 [2024-10-07 14:51:21.821057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:40:58.325 [2024-10-07 14:51:21.821069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:40:58.325 [2024-10-07 14:51:21.824800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:40:58.325 [2024-10-07 14:51:21.826301] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:58.325 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:58.325 14:51:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3300102 00:40:58.325 [2024-10-07 14:51:21.833826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:40:58.325 [2024-10-07 14:51:21.877136] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:41:00.197 4297.71 IOPS, 16.79 MiB/s [2024-10-07T12:51:24.845Z] 5011.25 IOPS, 19.58 MiB/s [2024-10-07T12:51:25.782Z] 5566.33 IOPS, 21.74 MiB/s [2024-10-07T12:51:26.719Z] 6012.20 IOPS, 23.49 MiB/s [2024-10-07T12:51:28.098Z] 6375.55 IOPS, 24.90 MiB/s [2024-10-07T12:51:29.035Z] 6678.83 IOPS, 26.09 MiB/s [2024-10-07T12:51:29.971Z] 6938.00 IOPS, 27.10 MiB/s [2024-10-07T12:51:30.913Z] 7160.50 IOPS, 27.97 MiB/s [2024-10-07T12:51:30.913Z] 7339.47 IOPS, 28.67 MiB/s 00:41:07.204 Latency(us) 00:41:07.204 [2024-10-07T12:51:30.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:07.204 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:41:07.204 Verification LBA range: start 0x0 length 0x4000 00:41:07.204 Nvme1n1 : 15.01 7341.52 28.68 9285.15 0.00 7671.92 849.92 26214.40 00:41:07.204 [2024-10-07T12:51:30.913Z] =================================================================================================================== 00:41:07.204 [2024-10-07T12:51:30.913Z] Total : 7341.52 28.68 9285.15 0.00 7671.92 849.92 26214.40 00:41:07.774 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:41:07.774 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:41:07.774 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:07.774 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:41:07.774 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:07.774 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:41:07.774 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:41:07.774 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:07.774 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:41:07.774 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:07.774 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:41:07.774 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:07.774 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:07.774 rmmod nvme_tcp 00:41:07.774 rmmod nvme_fabrics 00:41:07.774 rmmod nvme_keyring 00:41:08.034 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:08.034 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:41:08.034 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:41:08.034 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 3301342 ']' 00:41:08.034 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 3301342 00:41:08.034 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 3301342 ']' 00:41:08.034 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 3301342 00:41:08.034 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname
00:41:08.034 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:08.034 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3301342 00:41:08.034 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:08.034 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:08.034 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3301342' 00:41:08.034 killing process with pid 3301342 00:41:08.034 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 3301342 00:41:08.034 14:51:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 3301342 00:41:08.603 14:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:08.603 14:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:08.603 14:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:08.603 14:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:41:08.603 14:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:41:08.603 14:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:08.603 14:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:41:08.603 14:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:08.603 14:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:08.603 14:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:08.603 14:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:41:08.603 14:51:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:11.143 00:41:11.143 real 0m30.579s 00:41:11.143 user 1m11.444s 00:41:11.143 sys 0m7.746s 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:41:11.143 ************************************ 00:41:11.143 END TEST nvmf_bdevperf 00:41:11.143 ************************************ 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:41:11.143 ************************************ 00:41:11.143 START TEST nvmf_target_disconnect 00:41:11.143 ************************************ 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:41:11.143 * Looking for test storage...
00:41:11.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:41:11.143 14:51:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:11.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:11.143 
--rc genhtml_branch_coverage=1 00:41:11.143 --rc genhtml_function_coverage=1 00:41:11.143 --rc genhtml_legend=1 00:41:11.143 --rc geninfo_all_blocks=1 00:41:11.143 --rc geninfo_unexecuted_blocks=1 00:41:11.143 00:41:11.143 ' 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:11.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:11.143 --rc genhtml_branch_coverage=1 00:41:11.143 --rc genhtml_function_coverage=1 00:41:11.143 --rc genhtml_legend=1 00:41:11.143 --rc geninfo_all_blocks=1 00:41:11.143 --rc geninfo_unexecuted_blocks=1 00:41:11.143 00:41:11.143 ' 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:11.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:11.143 --rc genhtml_branch_coverage=1 00:41:11.143 --rc genhtml_function_coverage=1 00:41:11.143 --rc genhtml_legend=1 00:41:11.143 --rc geninfo_all_blocks=1 00:41:11.143 --rc geninfo_unexecuted_blocks=1 00:41:11.143 00:41:11.143 ' 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:11.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:11.143 --rc genhtml_branch_coverage=1 00:41:11.143 --rc genhtml_function_coverage=1 00:41:11.143 --rc genhtml_legend=1 00:41:11.143 --rc geninfo_all_blocks=1 00:41:11.143 --rc geninfo_unexecuted_blocks=1 00:41:11.143 00:41:11.143 ' 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:11.143 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s 
extglob 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:11.144 14:51:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:11.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:41:11.144 14:51:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:41:19.279 
14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:19.279 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:19.279 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:41:19.279 Found net devices under 0000:31:00.0: cvl_0_0 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:19.279 Found net devices under 0000:31:00.1: cvl_0_1 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:19.279 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:19.280 14:51:41 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:19.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:19.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:41:19.280 00:41:19.280 --- 10.0.0.2 ping statistics --- 00:41:19.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:19.280 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:19.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:19.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:41:19.280 00:41:19.280 --- 10.0.0.1 ping statistics --- 00:41:19.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:19.280 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:19.280 14:51:41 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:19.280 14:51:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:41:19.280 ************************************ 00:41:19.280 START TEST nvmf_target_disconnect_tc1 00:41:19.280 ************************************ 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:19.280 [2024-10-07 14:51:42.229263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:19.280 [2024-10-07 14:51:42.229374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039df80 
with addr=10.0.0.2, port=4420 00:41:19.280 [2024-10-07 14:51:42.229445] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:41:19.280 [2024-10-07 14:51:42.229469] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:41:19.280 [2024-10-07 14:51:42.229484] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:41:19.280 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:41:19.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:41:19.280 Initializing NVMe Controllers 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:41:19.280 00:41:19.280 real 0m0.225s 00:41:19.280 user 0m0.088s 00:41:19.280 sys 0m0.136s 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:41:19.280 ************************************ 00:41:19.280 END TEST nvmf_target_disconnect_tc1 00:41:19.280 ************************************ 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:19.280 14:51:42 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:41:19.280 ************************************ 00:41:19.280 START TEST nvmf_target_disconnect_tc2 00:41:19.280 ************************************ 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3307712 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3307712 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3307712 ']' 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:19.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:19.280 14:51:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:19.280 [2024-10-07 14:51:42.442109] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:41:19.280 [2024-10-07 14:51:42.442223] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:19.280 [2024-10-07 14:51:42.601398] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:19.280 [2024-10-07 14:51:42.832609] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:19.280 [2024-10-07 14:51:42.832659] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:19.281 [2024-10-07 14:51:42.832671] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:19.281 [2024-10-07 14:51:42.832683] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:19.281 [2024-10-07 14:51:42.832692] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:19.281 [2024-10-07 14:51:42.835293] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:41:19.281 [2024-10-07 14:51:42.835419] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:41:19.281 [2024-10-07 14:51:42.835518] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:41:19.281 [2024-10-07 14:51:42.835543] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:41:19.541 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:19.541 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:41:19.541 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:19.541 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:19.541 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:19.801 Malloc0 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.801 14:51:43 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:19.801 [2024-10-07 14:51:43.322565] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.801 14:51:43 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:19.801 [2024-10-07 14:51:43.363246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3307899 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:41:19.801 14:51:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:41:21.799 14:51:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3307712 00:41:21.799 14:51:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Write completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Write completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Write completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Write completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Write completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Write completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 
Write completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Write completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Write completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Write completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Write completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Write completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Write completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 [2024-10-07 14:51:45.407777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Write completed 
with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Write completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Write completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Write completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Write completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Write completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Write completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Write completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Write completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Read completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Write completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.799 Write completed with error (sct=0, sc=8) 00:41:21.799 starting I/O failed 00:41:21.800 Read completed with error (sct=0, sc=8) 00:41:21.800 starting I/O failed 00:41:21.800 Write completed with error (sct=0, sc=8) 00:41:21.800 starting I/O failed 00:41:21.800 Read completed with error (sct=0, sc=8) 00:41:21.800 starting I/O failed 00:41:21.800 Write completed with error (sct=0, sc=8) 00:41:21.800 starting I/O failed 00:41:21.800 Write completed with error (sct=0, sc=8) 00:41:21.800 starting I/O failed 00:41:21.800 [2024-10-07 
14:51:45.408278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:21.800 [2024-10-07 14:51:45.408663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.800 [2024-10-07 14:51:45.408693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.800 qpair failed and we were unable to recover it. 00:41:21.800 [2024-10-07 14:51:45.408866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.800 [2024-10-07 14:51:45.408882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.800 qpair failed and we were unable to recover it. 00:41:21.800 [2024-10-07 14:51:45.409245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.800 [2024-10-07 14:51:45.409292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.800 qpair failed and we were unable to recover it. 00:41:21.800 [2024-10-07 14:51:45.409540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.800 [2024-10-07 14:51:45.409557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.800 qpair failed and we were unable to recover it. 00:41:21.800 [2024-10-07 14:51:45.409796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.800 [2024-10-07 14:51:45.409810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.800 qpair failed and we were unable to recover it. 
00:41:21.801 [2024-10-07 14:51:45.421689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.421703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.421994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.422014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.422255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.422269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.422597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.422611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.422909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.422922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 
00:41:21.801 [2024-10-07 14:51:45.423121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.423136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.423459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.423473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.423729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.423743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.424076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.424090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.424379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.424393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 
00:41:21.801 [2024-10-07 14:51:45.424752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.424766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.425108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.425123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.425428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.425442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.425777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.425791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.426096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.426110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 
00:41:21.801 [2024-10-07 14:51:45.426294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.426308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.426674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.426687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.426946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.426959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.427251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.427265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.427559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.427572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 
00:41:21.801 [2024-10-07 14:51:45.427884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.427898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.428236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.428251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.429034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.429049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.429341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.429356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.429653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.429667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 
00:41:21.801 [2024-10-07 14:51:45.429869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.429884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.430232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.430246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.430595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.430610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.430951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.430964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.431290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.431305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 
00:41:21.801 [2024-10-07 14:51:45.431488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.801 [2024-10-07 14:51:45.431502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.801 qpair failed and we were unable to recover it. 00:41:21.801 [2024-10-07 14:51:45.431823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.431837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.432042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.432061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.432377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.432392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.432730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.432743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 
00:41:21.802 [2024-10-07 14:51:45.432922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.432937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.433251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.433265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.433580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.433594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.433906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.433919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.434226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.434241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 
00:41:21.802 [2024-10-07 14:51:45.434529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.434542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.434789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.434803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.435142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.435156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.435499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.435512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.435799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.435812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 
00:41:21.802 [2024-10-07 14:51:45.436112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.436126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.436410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.436424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.436745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.436759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.436890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.436903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.437081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.437097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 
00:41:21.802 [2024-10-07 14:51:45.437393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.437409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.437725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.437738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.438025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.438039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.438348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.438361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.438533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.438548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 
00:41:21.802 [2024-10-07 14:51:45.438909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.438922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.439282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.439296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.439650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.439664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.439982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.439995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.440284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.440299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 
00:41:21.802 [2024-10-07 14:51:45.440593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.440607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.440890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.440904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.441129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.441142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.441437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.441450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.802 [2024-10-07 14:51:45.441750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.441764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 
00:41:21.802 [2024-10-07 14:51:45.442079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.802 [2024-10-07 14:51:45.442094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.802 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.442274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.442287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.442581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.442595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.442857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.442870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.443172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.443187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 
00:41:21.803 [2024-10-07 14:51:45.443491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.443504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.443766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.443779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.444077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.444093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.444512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.444526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.444702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.444716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 
00:41:21.803 [2024-10-07 14:51:45.445038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.445052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.445352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.445365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.445704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.445718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.446037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.446051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.446370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.446383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 
00:41:21.803 [2024-10-07 14:51:45.446697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.446710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.447010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.447025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.447357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.447370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.447682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.447695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.447988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.448014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 
00:41:21.803 [2024-10-07 14:51:45.448327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.448342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.448721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.448734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.449134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.449148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.449448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.449461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.449773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.449787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 
00:41:21.803 [2024-10-07 14:51:45.450090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.450104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.450292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.450305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.450661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.450674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.450959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.450973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.451282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.451296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 
00:41:21.803 [2024-10-07 14:51:45.451594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.451607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.451941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.451954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.452314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.452328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.452641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.452654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.452868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.452882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 
00:41:21.803 [2024-10-07 14:51:45.453222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.453236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.453542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.453555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.453886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.453900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.454239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.454253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.803 [2024-10-07 14:51:45.454557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.454570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 
00:41:21.803 [2024-10-07 14:51:45.454773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.803 [2024-10-07 14:51:45.454789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.803 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.455090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.455104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.455405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.455419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.455708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.455721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.456075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.456089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 
00:41:21.804 [2024-10-07 14:51:45.456456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.456469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.456814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.456827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.457146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.457162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.457445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.457458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.457745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.457759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 
00:41:21.804 [2024-10-07 14:51:45.458088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.458103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.458390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.458412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.458739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.458753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.459046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.459059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.459383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.459395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 
00:41:21.804 [2024-10-07 14:51:45.459675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.459689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.460042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.460056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.460262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.460277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.460612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.460625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.460956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.460969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 
00:41:21.804 [2024-10-07 14:51:45.461257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.461271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.461636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.461649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.461957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.461970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.462151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.462165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.462487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.462500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 
00:41:21.804 [2024-10-07 14:51:45.462701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.462715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.463089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.463103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.463524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.463537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.463828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.463842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.464168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.464181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 
00:41:21.804 [2024-10-07 14:51:45.464525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.464539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.464852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.464866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.465153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.465167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.465460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.465473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.465786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.465799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 
00:41:21.804 [2024-10-07 14:51:45.466016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.466030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.466321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.466335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.466553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.466566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.466766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.466779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.804 [2024-10-07 14:51:45.467100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.467114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 
00:41:21.804 [2024-10-07 14:51:45.467515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.804 [2024-10-07 14:51:45.467528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.804 qpair failed and we were unable to recover it. 00:41:21.805 [2024-10-07 14:51:45.467702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.467716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 00:41:21.805 [2024-10-07 14:51:45.468050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.468063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 00:41:21.805 [2024-10-07 14:51:45.468265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.468278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 00:41:21.805 [2024-10-07 14:51:45.468627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.468640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 
00:41:21.805 [2024-10-07 14:51:45.468925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.468939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 00:41:21.805 [2024-10-07 14:51:45.469190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.469205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 00:41:21.805 [2024-10-07 14:51:45.469508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.469525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 00:41:21.805 [2024-10-07 14:51:45.469855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.469868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 00:41:21.805 [2024-10-07 14:51:45.470079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.470094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 
00:41:21.805 [2024-10-07 14:51:45.470415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.470429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 00:41:21.805 [2024-10-07 14:51:45.470736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.470749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 00:41:21.805 [2024-10-07 14:51:45.471076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.471090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 00:41:21.805 [2024-10-07 14:51:45.471401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.471414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 00:41:21.805 [2024-10-07 14:51:45.471710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.471723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 
00:41:21.805 [2024-10-07 14:51:45.472038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.472051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 00:41:21.805 [2024-10-07 14:51:45.472368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.472382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 00:41:21.805 [2024-10-07 14:51:45.472662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.472675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 00:41:21.805 [2024-10-07 14:51:45.472975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.472988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 00:41:21.805 [2024-10-07 14:51:45.473263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.473276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 
00:41:21.805 [2024-10-07 14:51:45.473461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.473475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 00:41:21.805 [2024-10-07 14:51:45.473838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.473852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 00:41:21.805 [2024-10-07 14:51:45.474173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.474187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 00:41:21.805 [2024-10-07 14:51:45.474526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.474539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 00:41:21.805 [2024-10-07 14:51:45.474918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.474931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 
00:41:21.805 [2024-10-07 14:51:45.475240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:21.805 [2024-10-07 14:51:45.475254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:21.805 qpair failed and we were unable to recover it. 
[... same connect()/qpair failure pair (posix.c:1055 errno = 111; nvme_tcp.c:2399 tqpair=0x61500039f100, addr=10.0.0.2, port=4420) repeated for every retry from 14:51:45.475614 through 14:51:45.511401 ...]
00:41:22.081 [2024-10-07 14:51:45.511704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.081 [2024-10-07 14:51:45.511718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.081 qpair failed and we were unable to recover it. 00:41:22.081 [2024-10-07 14:51:45.512036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.081 [2024-10-07 14:51:45.512051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.081 qpair failed and we were unable to recover it. 00:41:22.081 [2024-10-07 14:51:45.512366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.081 [2024-10-07 14:51:45.512379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.081 qpair failed and we were unable to recover it. 00:41:22.081 [2024-10-07 14:51:45.512669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.081 [2024-10-07 14:51:45.512682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.081 qpair failed and we were unable to recover it. 00:41:22.081 [2024-10-07 14:51:45.512866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.081 [2024-10-07 14:51:45.512879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.081 qpair failed and we were unable to recover it. 
00:41:22.081 [2024-10-07 14:51:45.513171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.081 [2024-10-07 14:51:45.513185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.081 qpair failed and we were unable to recover it. 00:41:22.081 [2024-10-07 14:51:45.513482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.081 [2024-10-07 14:51:45.513495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.081 qpair failed and we were unable to recover it. 00:41:22.081 [2024-10-07 14:51:45.513883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.081 [2024-10-07 14:51:45.513896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.081 qpair failed and we were unable to recover it. 00:41:22.081 [2024-10-07 14:51:45.514212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.081 [2024-10-07 14:51:45.514226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.081 qpair failed and we were unable to recover it. 00:41:22.081 [2024-10-07 14:51:45.514411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.081 [2024-10-07 14:51:45.514424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.081 qpair failed and we were unable to recover it. 
00:41:22.081 [2024-10-07 14:51:45.514723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.081 [2024-10-07 14:51:45.514736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.081 qpair failed and we were unable to recover it. 00:41:22.081 [2024-10-07 14:51:45.515072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.081 [2024-10-07 14:51:45.515086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.081 qpair failed and we were unable to recover it. 00:41:22.081 [2024-10-07 14:51:45.515312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.081 [2024-10-07 14:51:45.515326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.081 qpair failed and we were unable to recover it. 00:41:22.081 [2024-10-07 14:51:45.515666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.081 [2024-10-07 14:51:45.515679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.081 qpair failed and we were unable to recover it. 00:41:22.081 [2024-10-07 14:51:45.515996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.081 [2024-10-07 14:51:45.516013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.081 qpair failed and we were unable to recover it. 
00:41:22.081 [2024-10-07 14:51:45.516276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.081 [2024-10-07 14:51:45.516290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.081 qpair failed and we were unable to recover it. 00:41:22.081 [2024-10-07 14:51:45.516601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.081 [2024-10-07 14:51:45.516615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.081 qpair failed and we were unable to recover it. 00:41:22.081 [2024-10-07 14:51:45.516926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.081 [2024-10-07 14:51:45.516939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.081 qpair failed and we were unable to recover it. 00:41:22.081 [2024-10-07 14:51:45.517242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.081 [2024-10-07 14:51:45.517256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.081 qpair failed and we were unable to recover it. 00:41:22.081 [2024-10-07 14:51:45.517465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.081 [2024-10-07 14:51:45.517479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.081 qpair failed and we were unable to recover it. 
00:41:22.081 [2024-10-07 14:51:45.517821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.081 [2024-10-07 14:51:45.517835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.081 qpair failed and we were unable to recover it. 00:41:22.081 [2024-10-07 14:51:45.518158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.081 [2024-10-07 14:51:45.518172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.081 qpair failed and we were unable to recover it. 00:41:22.081 [2024-10-07 14:51:45.518490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.081 [2024-10-07 14:51:45.518504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.081 qpair failed and we were unable to recover it. 00:41:22.081 [2024-10-07 14:51:45.519171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.081 [2024-10-07 14:51:45.519195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.081 qpair failed and we were unable to recover it. 00:41:22.081 [2024-10-07 14:51:45.519494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.519510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 
00:41:22.082 [2024-10-07 14:51:45.519837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.519851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.520177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.520192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.520382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.520395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.520728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.520750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.521103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.521117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 
00:41:22.082 [2024-10-07 14:51:45.521440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.521454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.521782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.521796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.522006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.522020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.522298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.522311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.522514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.522527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 
00:41:22.082 [2024-10-07 14:51:45.522797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.522811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.523123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.523137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.523456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.523470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.523831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.523845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.524148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.524161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 
00:41:22.082 [2024-10-07 14:51:45.524476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.524490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.524814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.524828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.525216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.525231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.525557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.525571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.525897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.525911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 
00:41:22.082 [2024-10-07 14:51:45.526217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.526231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.526427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.526442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.526769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.526782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.527106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.527121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.527446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.527460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 
00:41:22.082 [2024-10-07 14:51:45.527774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.527788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.528100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.528114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.528399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.528413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.528722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.528735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.529079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.529095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 
00:41:22.082 [2024-10-07 14:51:45.529404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.529418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.529740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.529753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.530090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.530103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.530427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.530441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.530748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.530761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 
00:41:22.082 [2024-10-07 14:51:45.531092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.531106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.531528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.531541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.531735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.531748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.532049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.082 [2024-10-07 14:51:45.532063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.082 qpair failed and we were unable to recover it. 00:41:22.082 [2024-10-07 14:51:45.532375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.083 [2024-10-07 14:51:45.532389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.083 qpair failed and we were unable to recover it. 
00:41:22.083 [2024-10-07 14:51:45.532718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.083 [2024-10-07 14:51:45.532731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.083 qpair failed and we were unable to recover it. 00:41:22.083 [2024-10-07 14:51:45.533066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.083 [2024-10-07 14:51:45.533080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.083 qpair failed and we were unable to recover it. 00:41:22.083 [2024-10-07 14:51:45.533414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.083 [2024-10-07 14:51:45.533427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.083 qpair failed and we were unable to recover it. 00:41:22.083 [2024-10-07 14:51:45.533738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.083 [2024-10-07 14:51:45.533752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.083 qpair failed and we were unable to recover it. 00:41:22.083 [2024-10-07 14:51:45.533971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.083 [2024-10-07 14:51:45.533985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.083 qpair failed and we were unable to recover it. 
00:41:22.083 [2024-10-07 14:51:45.534276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.083 [2024-10-07 14:51:45.534289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.083 qpair failed and we were unable to recover it. 00:41:22.083 [2024-10-07 14:51:45.534605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.083 [2024-10-07 14:51:45.534619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.083 qpair failed and we were unable to recover it. 00:41:22.083 [2024-10-07 14:51:45.534996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.083 [2024-10-07 14:51:45.535014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.083 qpair failed and we were unable to recover it. 00:41:22.083 [2024-10-07 14:51:45.535331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.083 [2024-10-07 14:51:45.535344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.083 qpair failed and we were unable to recover it. 00:41:22.083 [2024-10-07 14:51:45.535661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.083 [2024-10-07 14:51:45.535680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.083 qpair failed and we were unable to recover it. 
00:41:22.083 [2024-10-07 14:51:45.536015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.083 [2024-10-07 14:51:45.536029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.083 qpair failed and we were unable to recover it. 00:41:22.083 [... the same three-message sequence (connect() failed with errno = 111 / ECONNREFUSED, sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it) repeated verbatim for every retry from 14:51:45.536332 through 14:51:45.572468 ...]
00:41:22.086 [2024-10-07 14:51:45.572798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.572811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 00:41:22.086 [2024-10-07 14:51:45.573075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.573089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 00:41:22.086 [2024-10-07 14:51:45.573380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.573394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 00:41:22.086 [2024-10-07 14:51:45.573592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.573607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 00:41:22.086 [2024-10-07 14:51:45.573944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.573957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 
00:41:22.086 [2024-10-07 14:51:45.574200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.574214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 00:41:22.086 [2024-10-07 14:51:45.574535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.574548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 00:41:22.086 [2024-10-07 14:51:45.574938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.574952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 00:41:22.086 [2024-10-07 14:51:45.575275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.575288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 00:41:22.086 [2024-10-07 14:51:45.575505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.575518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 
00:41:22.086 [2024-10-07 14:51:45.575834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.575849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 00:41:22.086 [2024-10-07 14:51:45.576044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.576058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 00:41:22.086 [2024-10-07 14:51:45.576359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.576372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 00:41:22.086 [2024-10-07 14:51:45.576681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.576694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 00:41:22.086 [2024-10-07 14:51:45.576980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.576995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 
00:41:22.086 [2024-10-07 14:51:45.577323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.577336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 00:41:22.086 [2024-10-07 14:51:45.577651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.577666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 00:41:22.086 [2024-10-07 14:51:45.578007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.578021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 00:41:22.086 [2024-10-07 14:51:45.578329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.578343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 00:41:22.086 [2024-10-07 14:51:45.578730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.578743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 
00:41:22.086 [2024-10-07 14:51:45.579030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.579045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 00:41:22.086 [2024-10-07 14:51:45.579337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.579351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 00:41:22.086 [2024-10-07 14:51:45.579672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.579686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 00:41:22.086 [2024-10-07 14:51:45.580025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.580039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 00:41:22.086 [2024-10-07 14:51:45.580338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.580355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 
00:41:22.086 [2024-10-07 14:51:45.580663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.580677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 00:41:22.086 [2024-10-07 14:51:45.581016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.581031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 00:41:22.086 [2024-10-07 14:51:45.581360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.086 [2024-10-07 14:51:45.581373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.086 qpair failed and we were unable to recover it. 00:41:22.086 [2024-10-07 14:51:45.581657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.581671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.582012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.582025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 
00:41:22.087 [2024-10-07 14:51:45.582351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.582365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.582675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.582688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.582910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.582923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.583091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.583106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.583431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.583445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 
00:41:22.087 [2024-10-07 14:51:45.583735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.583749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.584074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.584088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.584408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.584422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.584610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.584626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.584950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.584964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 
00:41:22.087 [2024-10-07 14:51:45.585277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.585291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.585489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.585504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.585820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.585834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.586150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.586165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.586504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.586517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 
00:41:22.087 [2024-10-07 14:51:45.586830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.586844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.587164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.587179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.587516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.587531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.587857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.587870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.588191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.588205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 
00:41:22.087 [2024-10-07 14:51:45.588503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.588517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.588828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.588841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.589179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.589192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.589535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.589549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.589768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.589781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 
00:41:22.087 [2024-10-07 14:51:45.590111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.590124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.590434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.590447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.590758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.590772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.591075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.591090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.591389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.591403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 
00:41:22.087 [2024-10-07 14:51:45.591706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.591719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.592036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.592050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.592438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.592452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.592741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.592754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.593065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.593081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 
00:41:22.087 [2024-10-07 14:51:45.593394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.593407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.593729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.593744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.594067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.087 [2024-10-07 14:51:45.594081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.087 qpair failed and we were unable to recover it. 00:41:22.087 [2024-10-07 14:51:45.594467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.088 [2024-10-07 14:51:45.594480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.088 qpair failed and we were unable to recover it. 00:41:22.088 [2024-10-07 14:51:45.594801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.088 [2024-10-07 14:51:45.594814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.088 qpair failed and we were unable to recover it. 
00:41:22.088 [2024-10-07 14:51:45.595159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.088 [2024-10-07 14:51:45.595174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.088 qpair failed and we were unable to recover it. 00:41:22.088 [2024-10-07 14:51:45.595504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.088 [2024-10-07 14:51:45.595518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.088 qpair failed and we were unable to recover it. 00:41:22.088 [2024-10-07 14:51:45.595825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.088 [2024-10-07 14:51:45.595838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.088 qpair failed and we were unable to recover it. 00:41:22.088 [2024-10-07 14:51:45.596228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.088 [2024-10-07 14:51:45.596242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.088 qpair failed and we were unable to recover it. 00:41:22.088 [2024-10-07 14:51:45.596460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.088 [2024-10-07 14:51:45.596473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.088 qpair failed and we were unable to recover it. 
00:41:22.088 [2024-10-07 14:51:45.596833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.088 [2024-10-07 14:51:45.596847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.088 qpair failed and we were unable to recover it.
[... identical connect() failure (errno = 111) and qpair recovery error for tqpair=0x61500039f100, addr=10.0.0.2, port=4420 repeated continuously from 14:51:45.597147 through 14:51:45.632230; duplicate log lines elided ...]
00:41:22.091 [2024-10-07 14:51:45.632405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.632420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 00:41:22.091 [2024-10-07 14:51:45.632854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.632867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 00:41:22.091 [2024-10-07 14:51:45.633170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.633184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 00:41:22.091 [2024-10-07 14:51:45.633499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.633512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 00:41:22.091 [2024-10-07 14:51:45.633841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.633855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 
00:41:22.091 [2024-10-07 14:51:45.634154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.634168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 00:41:22.091 [2024-10-07 14:51:45.634389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.634402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 00:41:22.091 [2024-10-07 14:51:45.634743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.634756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 00:41:22.091 [2024-10-07 14:51:45.635071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.635086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 00:41:22.091 [2024-10-07 14:51:45.635389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.635403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 
00:41:22.091 [2024-10-07 14:51:45.635786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.635799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 00:41:22.091 [2024-10-07 14:51:45.636123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.636137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 00:41:22.091 [2024-10-07 14:51:45.636369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.636382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 00:41:22.091 [2024-10-07 14:51:45.636710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.636724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 00:41:22.091 [2024-10-07 14:51:45.637022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.637036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 
00:41:22.091 [2024-10-07 14:51:45.637360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.637374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 00:41:22.091 [2024-10-07 14:51:45.637703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.637716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 00:41:22.091 [2024-10-07 14:51:45.638029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.638043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 00:41:22.091 [2024-10-07 14:51:45.638415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.638429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 00:41:22.091 [2024-10-07 14:51:45.638644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.638658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 
00:41:22.091 [2024-10-07 14:51:45.638972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.638988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 00:41:22.091 [2024-10-07 14:51:45.639280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.639295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 00:41:22.091 [2024-10-07 14:51:45.639618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.639633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 00:41:22.091 [2024-10-07 14:51:45.639995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.640016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 00:41:22.091 [2024-10-07 14:51:45.640246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.640259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 
00:41:22.091 [2024-10-07 14:51:45.640558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.640572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 00:41:22.091 [2024-10-07 14:51:45.640879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.640892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 00:41:22.091 [2024-10-07 14:51:45.641220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.641235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 00:41:22.091 [2024-10-07 14:51:45.641580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.641594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 00:41:22.091 [2024-10-07 14:51:45.641926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.641938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.091 qpair failed and we were unable to recover it. 
00:41:22.091 [2024-10-07 14:51:45.642343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.091 [2024-10-07 14:51:45.642356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.642693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.642706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.643025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.643039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.643423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.643437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.643742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.643758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 
00:41:22.092 [2024-10-07 14:51:45.644159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.644173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.644471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.644485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.644821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.644834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.645149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.645163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.645395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.645409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 
00:41:22.092 [2024-10-07 14:51:45.645752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.645766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.646066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.646080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.646396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.646409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.646721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.646734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.647053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.647068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 
00:41:22.092 [2024-10-07 14:51:45.647393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.647406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.647743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.647756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.648073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.648087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.648411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.648424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.648607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.648622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 
00:41:22.092 [2024-10-07 14:51:45.648945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.648958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.649255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.649269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.649600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.649613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.649928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.649942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.650041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.650055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 
00:41:22.092 [2024-10-07 14:51:45.650357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.650370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.650688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.650702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.650976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.650990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.651245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.651259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.651581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.651594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 
00:41:22.092 [2024-10-07 14:51:45.651912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.651926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.652248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.652262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.652576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.652590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.652898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.652911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.653218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.653232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 
00:41:22.092 [2024-10-07 14:51:45.653544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.653557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.653851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.653864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.654251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.654265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.654480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.654493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 00:41:22.092 [2024-10-07 14:51:45.654689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.654704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.092 qpair failed and we were unable to recover it. 
00:41:22.092 [2024-10-07 14:51:45.654977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.092 [2024-10-07 14:51:45.654990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.093 qpair failed and we were unable to recover it. 00:41:22.093 [2024-10-07 14:51:45.655310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.093 [2024-10-07 14:51:45.655324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.093 qpair failed and we were unable to recover it. 00:41:22.093 [2024-10-07 14:51:45.655676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.093 [2024-10-07 14:51:45.655690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.093 qpair failed and we were unable to recover it. 00:41:22.093 [2024-10-07 14:51:45.655991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.093 [2024-10-07 14:51:45.656007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.093 qpair failed and we were unable to recover it. 00:41:22.093 [2024-10-07 14:51:45.656289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.093 [2024-10-07 14:51:45.656307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.093 qpair failed and we were unable to recover it. 
00:41:22.093 [2024-10-07 14:51:45.656577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.093 [2024-10-07 14:51:45.656590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.093 qpair failed and we were unable to recover it. 00:41:22.093 [2024-10-07 14:51:45.656904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.093 [2024-10-07 14:51:45.656926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.093 qpair failed and we were unable to recover it. 00:41:22.093 [2024-10-07 14:51:45.657121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.093 [2024-10-07 14:51:45.657135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.093 qpair failed and we were unable to recover it. 00:41:22.093 [2024-10-07 14:51:45.657419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.093 [2024-10-07 14:51:45.657433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.093 qpair failed and we were unable to recover it. 00:41:22.093 [2024-10-07 14:51:45.657719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.093 [2024-10-07 14:51:45.657732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.093 qpair failed and we were unable to recover it. 
00:41:22.096 [2024-10-07 14:51:45.694467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.694480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.694848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.694861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.695154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.695168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.695387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.695401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.695635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.695650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 
00:41:22.096 [2024-10-07 14:51:45.695946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.695960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.696309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.696323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.696628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.696644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.696861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.696875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.697296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.697310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 
00:41:22.096 [2024-10-07 14:51:45.697671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.697686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.698057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.698072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.698357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.698371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.698682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.698696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.699060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.699075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 
00:41:22.096 [2024-10-07 14:51:45.699285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.699298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.699629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.699642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.699965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.699979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.700292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.700308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.700510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.700525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 
00:41:22.096 [2024-10-07 14:51:45.700824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.700838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.701166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.701180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.701412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.701426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.701750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.701764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.702096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.702112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 
00:41:22.096 [2024-10-07 14:51:45.702433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.702447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.702773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.702788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.703114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.703129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.703444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.703458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.703838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.703851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 
00:41:22.096 [2024-10-07 14:51:45.704176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.704190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.704396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.704415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.096 [2024-10-07 14:51:45.704682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.096 [2024-10-07 14:51:45.704696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.096 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.704935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.704948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.705339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.705354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 
00:41:22.097 [2024-10-07 14:51:45.705665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.705679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.705999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.706017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.706318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.706332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.706670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.706684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.707007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.707021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 
00:41:22.097 [2024-10-07 14:51:45.707319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.707333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.707613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.707626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.708023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.708038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.708387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.708401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.708713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.708726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 
00:41:22.097 [2024-10-07 14:51:45.708902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.708916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.709138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.709153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.709482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.709496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.709821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.709834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.710160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.710175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 
00:41:22.097 [2024-10-07 14:51:45.710466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.710480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.710615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.710631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.710930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.710943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.711250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.711264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.711585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.711600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 
00:41:22.097 [2024-10-07 14:51:45.711933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.711947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.712273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.712287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.712589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.712609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.712810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.712825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.713026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.713042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 
00:41:22.097 [2024-10-07 14:51:45.713272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.713285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.713612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.713625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.713951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.713965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.714274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.714287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.714687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.714701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 
00:41:22.097 [2024-10-07 14:51:45.715011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.715025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.715358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.715372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.715709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.715723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.715920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.715933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.716239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.716253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 
00:41:22.097 [2024-10-07 14:51:45.716563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.716577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.097 qpair failed and we were unable to recover it. 00:41:22.097 [2024-10-07 14:51:45.716813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.097 [2024-10-07 14:51:45.716832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.098 qpair failed and we were unable to recover it. 00:41:22.098 [2024-10-07 14:51:45.717174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.098 [2024-10-07 14:51:45.717189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.098 qpair failed and we were unable to recover it. 00:41:22.098 [2024-10-07 14:51:45.717511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.098 [2024-10-07 14:51:45.717525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.098 qpair failed and we were unable to recover it. 00:41:22.098 [2024-10-07 14:51:45.717798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.098 [2024-10-07 14:51:45.717812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.098 qpair failed and we were unable to recover it. 
00:41:22.098 [2024-10-07 14:51:45.718017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.098 [2024-10-07 14:51:45.718032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.098 qpair failed and we were unable to recover it.
[... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." triple repeats roughly 110 more times between 14:51:45.718 and 14:51:45.753; repeats elided ...]
00:41:22.101 [2024-10-07 14:51:45.754046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.754060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.754472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.754490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.754792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.754806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.755134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.755148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.755344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.755358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 
00:41:22.101 [2024-10-07 14:51:45.755670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.755683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.756037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.756050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.756333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.756346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.756555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.756568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.756708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.756723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 
00:41:22.101 [2024-10-07 14:51:45.757037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.757050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.757256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.757270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.757493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.757506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.757874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.757887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.758277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.758291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 
00:41:22.101 [2024-10-07 14:51:45.758625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.758639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.758948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.758961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.759252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.759265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.759610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.759624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.759910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.759924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 
00:41:22.101 [2024-10-07 14:51:45.760247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.760261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.760578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.760592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.760935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.760948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.761249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.761263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.761582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.761596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 
00:41:22.101 [2024-10-07 14:51:45.761795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.761808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.762127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.762145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.762444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.762458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.762783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.762797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.763146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.763160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 
00:41:22.101 [2024-10-07 14:51:45.763462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.763483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.763803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.763816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.764139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.764154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.101 qpair failed and we were unable to recover it. 00:41:22.101 [2024-10-07 14:51:45.764466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.101 [2024-10-07 14:51:45.764479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.764817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.764830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 
00:41:22.102 [2024-10-07 14:51:45.765238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.765252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.765538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.765552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.765854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.765866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.766159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.766175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.766360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.766375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 
00:41:22.102 [2024-10-07 14:51:45.766743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.766756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.767067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.767083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.767427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.767440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.767758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.767779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.768070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.768084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 
00:41:22.102 [2024-10-07 14:51:45.768400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.768413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.768744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.768757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.769068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.769081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.769389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.769402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.769793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.769807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 
00:41:22.102 [2024-10-07 14:51:45.770114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.770128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.770427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.770441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.770763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.770776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.771095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.771109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.771407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.771420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 
00:41:22.102 [2024-10-07 14:51:45.771683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.771696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.772010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.772024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.772353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.772367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.772702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.772716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.773044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.773058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 
00:41:22.102 [2024-10-07 14:51:45.773347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.773360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.773698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.773711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.773918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.773931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.774223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.774237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.774557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.774570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 
00:41:22.102 [2024-10-07 14:51:45.774901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.774914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.775308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.102 [2024-10-07 14:51:45.775323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.102 qpair failed and we were unable to recover it. 00:41:22.102 [2024-10-07 14:51:45.775644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.103 [2024-10-07 14:51:45.775658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.103 qpair failed and we were unable to recover it. 00:41:22.103 [2024-10-07 14:51:45.775967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.103 [2024-10-07 14:51:45.775980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.103 qpair failed and we were unable to recover it. 00:41:22.103 [2024-10-07 14:51:45.776299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.103 [2024-10-07 14:51:45.776313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.103 qpair failed and we were unable to recover it. 
00:41:22.103 [2024-10-07 14:51:45.776498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.103 [2024-10-07 14:51:45.776513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.103 qpair failed and we were unable to recover it. 00:41:22.103 [2024-10-07 14:51:45.776804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.103 [2024-10-07 14:51:45.776817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.103 qpair failed and we were unable to recover it. 00:41:22.103 [2024-10-07 14:51:45.777140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.103 [2024-10-07 14:51:45.777155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.103 qpair failed and we were unable to recover it. 00:41:22.376 [2024-10-07 14:51:45.777439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.376 [2024-10-07 14:51:45.777454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.376 qpair failed and we were unable to recover it. 00:41:22.376 [2024-10-07 14:51:45.777785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.376 [2024-10-07 14:51:45.777798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.376 qpair failed and we were unable to recover it. 
00:41:22.376 [2024-10-07 14:51:45.778054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.376 [2024-10-07 14:51:45.778069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.376 qpair failed and we were unable to recover it. 00:41:22.376 [2024-10-07 14:51:45.778386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.376 [2024-10-07 14:51:45.778400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.376 qpair failed and we were unable to recover it. 00:41:22.376 [2024-10-07 14:51:45.778720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.376 [2024-10-07 14:51:45.778733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.376 qpair failed and we were unable to recover it. 00:41:22.376 [2024-10-07 14:51:45.779069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.376 [2024-10-07 14:51:45.779083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.376 qpair failed and we were unable to recover it. 00:41:22.376 [2024-10-07 14:51:45.779301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.377 [2024-10-07 14:51:45.779315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.377 qpair failed and we were unable to recover it. 
00:41:22.379 [2024-10-07 14:51:45.815815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.379 [2024-10-07 14:51:45.815828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.379 qpair failed and we were unable to recover it. 00:41:22.379 [2024-10-07 14:51:45.816128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.379 [2024-10-07 14:51:45.816142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.379 qpair failed and we were unable to recover it. 00:41:22.379 [2024-10-07 14:51:45.816477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.379 [2024-10-07 14:51:45.816491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.379 qpair failed and we were unable to recover it. 00:41:22.379 [2024-10-07 14:51:45.816804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.379 [2024-10-07 14:51:45.816818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.379 qpair failed and we were unable to recover it. 00:41:22.379 [2024-10-07 14:51:45.817027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.379 [2024-10-07 14:51:45.817041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.379 qpair failed and we were unable to recover it. 
00:41:22.379 [2024-10-07 14:51:45.817361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.379 [2024-10-07 14:51:45.817374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.817598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.817612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.817934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.817947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.818269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.818283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.818596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.818609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 
00:41:22.380 [2024-10-07 14:51:45.818933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.818946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.819274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.819292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.819624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.819638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.819966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.819979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.820280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.820294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 
00:41:22.380 [2024-10-07 14:51:45.820626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.820639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.820971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.820985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.821205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.821218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.821538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.821552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.821908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.821922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 
00:41:22.380 [2024-10-07 14:51:45.822307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.822322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.822607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.822621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.822949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.822963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.823233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.823247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.823555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.823570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 
00:41:22.380 [2024-10-07 14:51:45.823947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.823961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.824281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.824295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.824621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.824634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.824972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.824985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.825307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.825322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 
00:41:22.380 [2024-10-07 14:51:45.825656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.825670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.826015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.826030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.826361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.826375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.826691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.826705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.827016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.827030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 
00:41:22.380 [2024-10-07 14:51:45.827341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.827354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.827680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.827692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.827994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.828013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.828322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.828336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.828648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.828662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 
00:41:22.380 [2024-10-07 14:51:45.829011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.829024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.829202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.829217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.829492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.829505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.829677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.829692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 00:41:22.380 [2024-10-07 14:51:45.830070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.380 [2024-10-07 14:51:45.830084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.380 qpair failed and we were unable to recover it. 
00:41:22.380 [2024-10-07 14:51:45.830380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.830394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.830705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.830719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.830936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.830949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.831294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.831308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.831625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.831638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 
00:41:22.381 [2024-10-07 14:51:45.831948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.831961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.832350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.832367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.832654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.832668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.832884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.832897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.833246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.833261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 
00:41:22.381 [2024-10-07 14:51:45.833578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.833591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.833908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.833921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.834239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.834253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.834565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.834579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.834892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.834906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 
00:41:22.381 [2024-10-07 14:51:45.835238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.835252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.835568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.835582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.835897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.835911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.836247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.836261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.836598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.836611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 
00:41:22.381 [2024-10-07 14:51:45.836905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.836920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.837246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.837261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.837563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.837577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.837937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.837950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.838247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.838262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 
00:41:22.381 [2024-10-07 14:51:45.838588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.838602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.838929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.838944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.839250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.839265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.839598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.839612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.839934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.839948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 
00:41:22.381 [2024-10-07 14:51:45.840275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.840290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.840627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.381 [2024-10-07 14:51:45.840641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.381 qpair failed and we were unable to recover it. 00:41:22.381 [2024-10-07 14:51:45.840968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.840983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.841294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.841309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.841638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.841652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 
00:41:22.382 [2024-10-07 14:51:45.841983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.841997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.842322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.842337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.842692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.842706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.842907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.842921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.843253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.843268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 
00:41:22.382 [2024-10-07 14:51:45.843591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.843605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.843938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.843952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.844289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.844303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.844606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.844620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.844955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.844969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 
00:41:22.382 [2024-10-07 14:51:45.845184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.845199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.845550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.845567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.845780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.845794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.846116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.846130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.846441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.846454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 
00:41:22.382 [2024-10-07 14:51:45.846802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.846815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.847147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.847162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.847478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.847491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.847807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.847821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.848148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.848162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 
00:41:22.382 [2024-10-07 14:51:45.848498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.848511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.848819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.848833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.849148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.849170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.849485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.849499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.849830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.849844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 
00:41:22.382 [2024-10-07 14:51:45.850178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.850192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.850506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.850519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.850829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.850842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.851215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.851230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.851407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.851422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 
00:41:22.382 [2024-10-07 14:51:45.851636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.851650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.852037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.852051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.852362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.852375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.852743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.852756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.853068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.853083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 
00:41:22.382 [2024-10-07 14:51:45.853419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.853432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.382 [2024-10-07 14:51:45.853769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.382 [2024-10-07 14:51:45.853783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.382 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.854099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.854114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.854427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.854443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.854758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.854772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 
00:41:22.383 [2024-10-07 14:51:45.855115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.855129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.855451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.855466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.855780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.855793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.856076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.856090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.856425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.856438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 
00:41:22.383 [2024-10-07 14:51:45.856614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.856629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.856975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.856988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.857246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.857260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.857599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.857613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.857909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.857922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 
00:41:22.383 [2024-10-07 14:51:45.858257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.858271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.858541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.858554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.858848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.858861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.859170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.859184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.859524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.859537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 
00:41:22.383 [2024-10-07 14:51:45.859753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.859766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.860008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.860021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.860356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.860370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.860704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.860717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.861034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.861048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 
00:41:22.383 [2024-10-07 14:51:45.861200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.861214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.861494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.861507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.861825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.861839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.862153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.862168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.862468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.862481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 
00:41:22.383 [2024-10-07 14:51:45.862827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.862842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.863145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.863159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.863479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.863501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.863866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.863880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.864201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.864215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 
00:41:22.383 [2024-10-07 14:51:45.864506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.864520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.864848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.383 [2024-10-07 14:51:45.864861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.383 qpair failed and we were unable to recover it. 00:41:22.383 [2024-10-07 14:51:45.865243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.384 [2024-10-07 14:51:45.865257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.384 qpair failed and we were unable to recover it. 00:41:22.384 [2024-10-07 14:51:45.865593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.384 [2024-10-07 14:51:45.865606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.384 qpair failed and we were unable to recover it. 00:41:22.384 [2024-10-07 14:51:45.865916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.384 [2024-10-07 14:51:45.865929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.384 qpair failed and we were unable to recover it. 
00:41:22.384 [2024-10-07 14:51:45.866229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.384 [2024-10-07 14:51:45.866244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.384 qpair failed and we were unable to recover it. 00:41:22.384 [2024-10-07 14:51:45.866567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.384 [2024-10-07 14:51:45.866580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.384 qpair failed and we were unable to recover it. 00:41:22.384 [2024-10-07 14:51:45.866894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.384 [2024-10-07 14:51:45.866908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.384 qpair failed and we were unable to recover it. 00:41:22.384 [2024-10-07 14:51:45.867238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.384 [2024-10-07 14:51:45.867254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.384 qpair failed and we were unable to recover it. 00:41:22.384 [2024-10-07 14:51:45.867570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.384 [2024-10-07 14:51:45.867584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.384 qpair failed and we were unable to recover it. 
00:41:22.384 [2024-10-07 14:51:45.867806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.384 [2024-10-07 14:51:45.867819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.384 qpair failed and we were unable to recover it.
00:41:22.384 [2024-10-07 14:51:45.867920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.384 [2024-10-07 14:51:45.867933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.384 qpair failed and we were unable to recover it.
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Write completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Write completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Write completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Write completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 Read completed with error (sct=0, sc=8)
00:41:22.384 starting I/O failed
00:41:22.384 [2024-10-07 14:51:45.869131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:41:22.384 [2024-10-07 14:51:45.869612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.384 [2024-10-07 14:51:45.869673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420
00:41:22.384 qpair failed and we were unable to recover it.
00:41:22.384 [2024-10-07 14:51:45.870073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.384 [2024-10-07 14:51:45.870133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:41:22.384 qpair failed and we were unable to recover it. 00:41:22.384 [2024-10-07 14:51:45.870502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.384 [2024-10-07 14:51:45.870518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.384 qpair failed and we were unable to recover it. 00:41:22.384 [2024-10-07 14:51:45.870852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.384 [2024-10-07 14:51:45.870866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.384 qpair failed and we were unable to recover it. 00:41:22.384 [2024-10-07 14:51:45.871080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.384 [2024-10-07 14:51:45.871094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.384 qpair failed and we were unable to recover it. 00:41:22.384 [2024-10-07 14:51:45.871308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.384 [2024-10-07 14:51:45.871321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.384 qpair failed and we were unable to recover it. 
00:41:22.387 [2024-10-07 14:51:45.907338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.387 [2024-10-07 14:51:45.907353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.387 qpair failed and we were unable to recover it. 00:41:22.387 [2024-10-07 14:51:45.907653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.387 [2024-10-07 14:51:45.907667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.387 qpair failed and we were unable to recover it. 00:41:22.387 [2024-10-07 14:51:45.907995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.387 [2024-10-07 14:51:45.908013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.387 qpair failed and we were unable to recover it. 00:41:22.387 [2024-10-07 14:51:45.908317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.387 [2024-10-07 14:51:45.908332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.387 qpair failed and we were unable to recover it. 00:41:22.387 [2024-10-07 14:51:45.908658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.387 [2024-10-07 14:51:45.908671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.387 qpair failed and we were unable to recover it. 
00:41:22.387 [2024-10-07 14:51:45.908869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.387 [2024-10-07 14:51:45.908883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.387 qpair failed and we were unable to recover it. 00:41:22.387 [2024-10-07 14:51:45.909086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.387 [2024-10-07 14:51:45.909101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.387 qpair failed and we were unable to recover it. 00:41:22.387 [2024-10-07 14:51:45.909432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.387 [2024-10-07 14:51:45.909446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.387 qpair failed and we were unable to recover it. 00:41:22.387 [2024-10-07 14:51:45.909728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.387 [2024-10-07 14:51:45.909741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.387 qpair failed and we were unable to recover it. 00:41:22.387 [2024-10-07 14:51:45.910087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.387 [2024-10-07 14:51:45.910101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.387 qpair failed and we were unable to recover it. 
00:41:22.387 [2024-10-07 14:51:45.910422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.387 [2024-10-07 14:51:45.910437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.387 qpair failed and we were unable to recover it. 00:41:22.387 [2024-10-07 14:51:45.910781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.387 [2024-10-07 14:51:45.910794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.387 qpair failed and we were unable to recover it. 00:41:22.387 [2024-10-07 14:51:45.911103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.387 [2024-10-07 14:51:45.911118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.911432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.911445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.911761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.911775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 
00:41:22.388 [2024-10-07 14:51:45.911969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.911983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.912328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.912342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.912627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.912642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.912910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.912923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.913141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.913155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 
00:41:22.388 [2024-10-07 14:51:45.913512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.913528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.913854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.913868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.914208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.914222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.914507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.914522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.914855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.914868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 
00:41:22.388 [2024-10-07 14:51:45.915176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.915191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.915586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.915599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.915880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.915901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.916184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.916197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.916436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.916449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 
00:41:22.388 [2024-10-07 14:51:45.916761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.916774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.917061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.917075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.917384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.917398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.917710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.917724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.918008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.918022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 
00:41:22.388 [2024-10-07 14:51:45.918339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.918354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.918671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.918684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.919017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.919030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.919416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.919430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.919752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.919766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 
00:41:22.388 [2024-10-07 14:51:45.920076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.920090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.920377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.920391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.920698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.920712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.921045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.921059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.921409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.921422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 
00:41:22.388 [2024-10-07 14:51:45.921754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.921775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.922100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.922114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.923095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.923125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.923465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.923479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.923799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.923813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 
00:41:22.388 [2024-10-07 14:51:45.924154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.924168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.388 [2024-10-07 14:51:45.924479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.388 [2024-10-07 14:51:45.924493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.388 qpair failed and we were unable to recover it. 00:41:22.389 [2024-10-07 14:51:45.924802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.389 [2024-10-07 14:51:45.924815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.389 qpair failed and we were unable to recover it. 00:41:22.389 [2024-10-07 14:51:45.925139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.389 [2024-10-07 14:51:45.925153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.389 qpair failed and we were unable to recover it. 00:41:22.389 [2024-10-07 14:51:45.925486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.389 [2024-10-07 14:51:45.925500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.389 qpair failed and we were unable to recover it. 
00:41:22.389 [2024-10-07 14:51:45.925890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.389 [2024-10-07 14:51:45.925904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.389 qpair failed and we were unable to recover it. 00:41:22.389 [2024-10-07 14:51:45.926714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.389 [2024-10-07 14:51:45.926742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.389 qpair failed and we were unable to recover it. 00:41:22.389 [2024-10-07 14:51:45.927073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.389 [2024-10-07 14:51:45.927089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.389 qpair failed and we were unable to recover it. 00:41:22.389 [2024-10-07 14:51:45.927413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.389 [2024-10-07 14:51:45.927427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.389 qpair failed and we were unable to recover it. 00:41:22.389 [2024-10-07 14:51:45.927710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.389 [2024-10-07 14:51:45.927723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.389 qpair failed and we were unable to recover it. 
00:41:22.389 [2024-10-07 14:51:45.928041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.389 [2024-10-07 14:51:45.928058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.389 qpair failed and we were unable to recover it. 00:41:22.389 [2024-10-07 14:51:45.928371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.389 [2024-10-07 14:51:45.928385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.389 qpair failed and we were unable to recover it. 00:41:22.389 [2024-10-07 14:51:45.928700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.389 [2024-10-07 14:51:45.928714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.389 qpair failed and we were unable to recover it. 00:41:22.389 [2024-10-07 14:51:45.929006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.389 [2024-10-07 14:51:45.929021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.389 qpair failed and we were unable to recover it. 00:41:22.389 [2024-10-07 14:51:45.930121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.389 [2024-10-07 14:51:45.930152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.389 qpair failed and we were unable to recover it. 
00:41:22.389 [2024-10-07 14:51:45.930478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.389 [2024-10-07 14:51:45.930494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.389 qpair failed and we were unable to recover it. 00:41:22.389 [2024-10-07 14:51:45.931400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.389 [2024-10-07 14:51:45.931428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.389 qpair failed and we were unable to recover it. 00:41:22.389 [2024-10-07 14:51:45.931759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.389 [2024-10-07 14:51:45.931775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.389 qpair failed and we were unable to recover it. 00:41:22.389 [2024-10-07 14:51:45.932102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.389 [2024-10-07 14:51:45.932116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.389 qpair failed and we were unable to recover it. 00:41:22.389 [2024-10-07 14:51:45.932326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.389 [2024-10-07 14:51:45.932342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.389 qpair failed and we were unable to recover it. 
00:41:22.389 [2024-10-07 14:51:45.932565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.389 [2024-10-07 14:51:45.932578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.389 qpair failed and we were unable to recover it. 00:41:22.389 [2024-10-07 14:51:45.932889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.389 [2024-10-07 14:51:45.932903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.389 qpair failed and we were unable to recover it. 00:41:22.389 [2024-10-07 14:51:45.933218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.389 [2024-10-07 14:51:45.933232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.389 qpair failed and we were unable to recover it. 00:41:22.389 [2024-10-07 14:51:45.933430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.389 [2024-10-07 14:51:45.933445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.389 qpair failed and we were unable to recover it. 00:41:22.389 [2024-10-07 14:51:45.933766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.389 [2024-10-07 14:51:45.933780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.389 qpair failed and we were unable to recover it. 
00:41:22.389 [2024-10-07 14:51:45.934116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.389 [2024-10-07 14:51:45.934130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.389 qpair failed and we were unable to recover it.
[log collapsed: the message pair above — connect() failed, errno = 111 (ECONNREFUSED) from posix.c:1055:posix_sock_create, followed by the nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock connection error and "qpair failed and we were unable to recover it." for tqpair=0x61500039f100 (addr=10.0.0.2, port=4420) — repeats approximately 115 more times between 14:51:45.934 and 14:51:45.972 with only the timestamps differing; duplicate occurrences omitted]
00:41:22.392 [2024-10-07 14:51:45.972411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.392 [2024-10-07 14:51:45.972424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.392 qpair failed and we were unable to recover it. 00:41:22.392 [2024-10-07 14:51:45.972743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.392 [2024-10-07 14:51:45.972755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.392 qpair failed and we were unable to recover it. 00:41:22.392 [2024-10-07 14:51:45.973091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.392 [2024-10-07 14:51:45.973105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.392 qpair failed and we were unable to recover it. 00:41:22.392 [2024-10-07 14:51:45.973313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.392 [2024-10-07 14:51:45.973328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.392 qpair failed and we were unable to recover it. 00:41:22.392 [2024-10-07 14:51:45.973549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.392 [2024-10-07 14:51:45.973563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.392 qpair failed and we were unable to recover it. 
00:41:22.392 [2024-10-07 14:51:45.974005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.392 [2024-10-07 14:51:45.974019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.392 qpair failed and we were unable to recover it. 00:41:22.392 [2024-10-07 14:51:45.974338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.392 [2024-10-07 14:51:45.974352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.392 qpair failed and we were unable to recover it. 00:41:22.392 [2024-10-07 14:51:45.974615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.392 [2024-10-07 14:51:45.974628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.392 qpair failed and we were unable to recover it. 00:41:22.392 [2024-10-07 14:51:45.974924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.392 [2024-10-07 14:51:45.974943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.392 qpair failed and we were unable to recover it. 00:41:22.392 [2024-10-07 14:51:45.975295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.392 [2024-10-07 14:51:45.975309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.392 qpair failed and we were unable to recover it. 
00:41:22.392 [2024-10-07 14:51:45.975633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.392 [2024-10-07 14:51:45.975647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.392 qpair failed and we were unable to recover it. 00:41:22.392 [2024-10-07 14:51:45.975963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.392 [2024-10-07 14:51:45.975976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.392 qpair failed and we were unable to recover it. 00:41:22.392 [2024-10-07 14:51:45.976362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.392 [2024-10-07 14:51:45.976376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.392 qpair failed and we were unable to recover it. 00:41:22.392 [2024-10-07 14:51:45.976683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.392 [2024-10-07 14:51:45.976696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.392 qpair failed and we were unable to recover it. 00:41:22.392 [2024-10-07 14:51:45.976823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.392 [2024-10-07 14:51:45.976836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.392 qpair failed and we were unable to recover it. 
00:41:22.393 [2024-10-07 14:51:45.977215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.977228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.977518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.977539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.977892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.977905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.978314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.978328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.978614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.978628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 
00:41:22.393 [2024-10-07 14:51:45.978982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.978995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.979214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.979228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.979541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.979554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.979868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.979882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.980224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.980239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 
00:41:22.393 [2024-10-07 14:51:45.980560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.980573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.980785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.980798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.981034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.981048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.981354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.981367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.981690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.981704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 
00:41:22.393 [2024-10-07 14:51:45.982032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.982048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.982347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.982361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.982542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.982556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.982869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.982883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.983222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.983236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 
00:41:22.393 [2024-10-07 14:51:45.983563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.983577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.983910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.983923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.984176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.984190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.984433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.984446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.984776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.984790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 
00:41:22.393 [2024-10-07 14:51:45.985130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.985144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.985487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.985501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.985803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.985817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.986088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.986102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.986335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.986348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 
00:41:22.393 [2024-10-07 14:51:45.986654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.986667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.986916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.986930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.987254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.987268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.987561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.987575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.987977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.987990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 
00:41:22.393 [2024-10-07 14:51:45.988362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.988375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.988702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.988716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.988939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.988953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.989261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.393 [2024-10-07 14:51:45.989275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.393 qpair failed and we were unable to recover it. 00:41:22.393 [2024-10-07 14:51:45.989490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.989503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 
00:41:22.394 [2024-10-07 14:51:45.989715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.989728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 00:41:22.394 [2024-10-07 14:51:45.990095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.990110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 00:41:22.394 [2024-10-07 14:51:45.990338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.990353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 00:41:22.394 [2024-10-07 14:51:45.990570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.990583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 00:41:22.394 [2024-10-07 14:51:45.990905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.990918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 
00:41:22.394 [2024-10-07 14:51:45.991287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.991300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 00:41:22.394 [2024-10-07 14:51:45.991624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.991637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 00:41:22.394 [2024-10-07 14:51:45.991827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.991841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 00:41:22.394 [2024-10-07 14:51:45.992073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.992088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 00:41:22.394 [2024-10-07 14:51:45.992453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.992467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 
00:41:22.394 [2024-10-07 14:51:45.992681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.992696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 00:41:22.394 [2024-10-07 14:51:45.992985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.992999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 00:41:22.394 [2024-10-07 14:51:45.993316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.993331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 00:41:22.394 [2024-10-07 14:51:45.993636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.993650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 00:41:22.394 [2024-10-07 14:51:45.994017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.994032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 
00:41:22.394 [2024-10-07 14:51:45.994360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.994375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 00:41:22.394 [2024-10-07 14:51:45.994683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.994697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 00:41:22.394 [2024-10-07 14:51:45.994968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.994981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 00:41:22.394 [2024-10-07 14:51:45.995414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.995428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 00:41:22.394 [2024-10-07 14:51:45.995709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.995723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 
00:41:22.394 [2024-10-07 14:51:45.995958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.995971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 00:41:22.394 [2024-10-07 14:51:45.996338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.996352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 00:41:22.394 [2024-10-07 14:51:45.996671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.996685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 00:41:22.394 [2024-10-07 14:51:45.996979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.996994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 00:41:22.394 [2024-10-07 14:51:45.997327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.394 [2024-10-07 14:51:45.997341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.394 qpair failed and we were unable to recover it. 
[log condensed: the same three-message sequence — posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats without variation for every retry from 14:51:45.997664 through 14:51:46.032081]
00:41:22.397 [2024-10-07 14:51:46.032484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.397 [2024-10-07 14:51:46.032500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.397 qpair failed and we were unable to recover it. 00:41:22.397 [2024-10-07 14:51:46.032792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.397 [2024-10-07 14:51:46.032806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.397 qpair failed and we were unable to recover it. 00:41:22.397 [2024-10-07 14:51:46.033118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.397 [2024-10-07 14:51:46.033133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.397 qpair failed and we were unable to recover it. 00:41:22.397 [2024-10-07 14:51:46.033439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.397 [2024-10-07 14:51:46.033452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.397 qpair failed and we were unable to recover it. 00:41:22.397 [2024-10-07 14:51:46.033768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.397 [2024-10-07 14:51:46.033781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.397 qpair failed and we were unable to recover it. 
00:41:22.397 [2024-10-07 14:51:46.034061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.397 [2024-10-07 14:51:46.034074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.397 qpair failed and we were unable to recover it. 00:41:22.397 [2024-10-07 14:51:46.034393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.397 [2024-10-07 14:51:46.034406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.397 qpair failed and we were unable to recover it. 00:41:22.397 [2024-10-07 14:51:46.034729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.397 [2024-10-07 14:51:46.034743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.397 qpair failed and we were unable to recover it. 00:41:22.397 [2024-10-07 14:51:46.035064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.397 [2024-10-07 14:51:46.035080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.397 qpair failed and we were unable to recover it. 00:41:22.397 [2024-10-07 14:51:46.035405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.397 [2024-10-07 14:51:46.035419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.397 qpair failed and we were unable to recover it. 
00:41:22.397 [2024-10-07 14:51:46.036194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.397 [2024-10-07 14:51:46.036222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.397 qpair failed and we were unable to recover it. 00:41:22.397 [2024-10-07 14:51:46.036559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.397 [2024-10-07 14:51:46.036574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.397 qpair failed and we were unable to recover it. 00:41:22.397 [2024-10-07 14:51:46.036906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.397 [2024-10-07 14:51:46.036920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.397 qpair failed and we were unable to recover it. 00:41:22.397 [2024-10-07 14:51:46.037250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.397 [2024-10-07 14:51:46.037264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.397 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.037452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.037467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 
00:41:22.398 [2024-10-07 14:51:46.037801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.037815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.038030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.038044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.038379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.038393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.038725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.038739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.038936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.038950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 
00:41:22.398 [2024-10-07 14:51:46.039306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.039319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.039613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.039627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.039962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.039976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.040290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.040304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.040614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.040627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 
00:41:22.398 [2024-10-07 14:51:46.040940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.040953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.041247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.041261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.041573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.041587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.041919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.041933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.042310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.042324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 
00:41:22.398 [2024-10-07 14:51:46.042660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.042675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.042994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.043012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.043223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.043237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.043561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.043576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.043906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.043919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 
00:41:22.398 [2024-10-07 14:51:46.044324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.044338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.044669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.044682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.044992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.045013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.045344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.045357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.045642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.045655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 
00:41:22.398 [2024-10-07 14:51:46.045983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.045998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.046374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.046388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.046708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.046721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.047058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.047072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.047384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.047397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 
00:41:22.398 [2024-10-07 14:51:46.047703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.047717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.048047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.048062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.048391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.048404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.048598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.048613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.048940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.048953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 
00:41:22.398 [2024-10-07 14:51:46.049258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.049273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.049590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.049604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.398 [2024-10-07 14:51:46.049812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.398 [2024-10-07 14:51:46.049826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.398 qpair failed and we were unable to recover it. 00:41:22.399 [2024-10-07 14:51:46.050136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.399 [2024-10-07 14:51:46.050149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.399 qpair failed and we were unable to recover it. 00:41:22.399 [2024-10-07 14:51:46.050482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.399 [2024-10-07 14:51:46.050496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.399 qpair failed and we were unable to recover it. 
00:41:22.399 [2024-10-07 14:51:46.050826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.399 [2024-10-07 14:51:46.050839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.399 qpair failed and we were unable to recover it. 00:41:22.399 [2024-10-07 14:51:46.051128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.399 [2024-10-07 14:51:46.051142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.399 qpair failed and we were unable to recover it. 00:41:22.399 [2024-10-07 14:51:46.051491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.399 [2024-10-07 14:51:46.051504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.399 qpair failed and we were unable to recover it. 00:41:22.399 [2024-10-07 14:51:46.051715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.399 [2024-10-07 14:51:46.051728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.399 qpair failed and we were unable to recover it. 00:41:22.399 [2024-10-07 14:51:46.051927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.399 [2024-10-07 14:51:46.051942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.399 qpair failed and we were unable to recover it. 
00:41:22.399 [2024-10-07 14:51:46.052281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.399 [2024-10-07 14:51:46.052295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.399 qpair failed and we were unable to recover it. 00:41:22.399 [2024-10-07 14:51:46.052673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.399 [2024-10-07 14:51:46.052687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.399 qpair failed and we were unable to recover it. 00:41:22.399 [2024-10-07 14:51:46.052983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.399 [2024-10-07 14:51:46.052997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.399 qpair failed and we were unable to recover it. 00:41:22.399 [2024-10-07 14:51:46.053326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.399 [2024-10-07 14:51:46.053340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.399 qpair failed and we were unable to recover it. 00:41:22.399 [2024-10-07 14:51:46.053672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.399 [2024-10-07 14:51:46.053685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.399 qpair failed and we were unable to recover it. 
00:41:22.399 [2024-10-07 14:51:46.054051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.399 [2024-10-07 14:51:46.054065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.399 qpair failed and we were unable to recover it. 00:41:22.399 [2024-10-07 14:51:46.054381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.399 [2024-10-07 14:51:46.054394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.399 qpair failed and we were unable to recover it. 00:41:22.399 [2024-10-07 14:51:46.054734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.399 [2024-10-07 14:51:46.054748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.399 qpair failed and we were unable to recover it. 00:41:22.399 [2024-10-07 14:51:46.055111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.399 [2024-10-07 14:51:46.055126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.399 qpair failed and we were unable to recover it. 00:41:22.399 [2024-10-07 14:51:46.055469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.399 [2024-10-07 14:51:46.055482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.399 qpair failed and we were unable to recover it. 
00:41:22.399 [2024-10-07 14:51:46.055810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.399 [2024-10-07 14:51:46.055824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.399 qpair failed and we were unable to recover it. 00:41:22.399 [2024-10-07 14:51:46.056152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.399 [2024-10-07 14:51:46.056166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.399 qpair failed and we were unable to recover it. 00:41:22.399 [2024-10-07 14:51:46.056394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.399 [2024-10-07 14:51:46.056407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.399 qpair failed and we were unable to recover it. 00:41:22.399 [2024-10-07 14:51:46.056728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.399 [2024-10-07 14:51:46.056744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.399 qpair failed and we were unable to recover it. 00:41:22.399 [2024-10-07 14:51:46.057082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.399 [2024-10-07 14:51:46.057096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.399 qpair failed and we were unable to recover it. 
00:41:22.399 [2024-10-07 14:51:46.057411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.399 [2024-10-07 14:51:46.057425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.399 qpair failed and we were unable to recover it.
00:41:22.399 [2024-10-07 14:51:46.057737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.399 [2024-10-07 14:51:46.057750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.399 qpair failed and we were unable to recover it.
00:41:22.399 [2024-10-07 14:51:46.058062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.399 [2024-10-07 14:51:46.058076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.399 qpair failed and we were unable to recover it.
00:41:22.399 [2024-10-07 14:51:46.058416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.399 [2024-10-07 14:51:46.058430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.399 qpair failed and we were unable to recover it.
00:41:22.399 [2024-10-07 14:51:46.058733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.399 [2024-10-07 14:51:46.058746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.399 qpair failed and we were unable to recover it.
00:41:22.399 [2024-10-07 14:51:46.059016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.399 [2024-10-07 14:51:46.059032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.399 qpair failed and we were unable to recover it.
00:41:22.399 [2024-10-07 14:51:46.059362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.399 [2024-10-07 14:51:46.059376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.399 qpair failed and we were unable to recover it.
00:41:22.399 [2024-10-07 14:51:46.059754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.399 [2024-10-07 14:51:46.059768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.399 qpair failed and we were unable to recover it.
00:41:22.399 [2024-10-07 14:51:46.060045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.399 [2024-10-07 14:51:46.060059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.399 qpair failed and we were unable to recover it.
00:41:22.399 [2024-10-07 14:51:46.060329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.399 [2024-10-07 14:51:46.060342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.399 qpair failed and we were unable to recover it.
00:41:22.399 [2024-10-07 14:51:46.060666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.399 [2024-10-07 14:51:46.060679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.399 qpair failed and we were unable to recover it.
00:41:22.399 [2024-10-07 14:51:46.061012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.399 [2024-10-07 14:51:46.061026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.061347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.061362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.061697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.061710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.062038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.062052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.062366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.062379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.062708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.062722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.063042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.063057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.063365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.063379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.063712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.063726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.064127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.064141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.064457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.064470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.064857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.064870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.065158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.065172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.065480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.065494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.065772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.065792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.066012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.066026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.066385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.066398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.066727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.066740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.066925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.066938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.067248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.067262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.067574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.067588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.067914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.067928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.068238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.068253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.068562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.068575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.068909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.068923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.069229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.069243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.069570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.069584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.069908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.069922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.070771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.070798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.071130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.071145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.071443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.071456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.071786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.071799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.072117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.072131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.400 [2024-10-07 14:51:46.072448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.400 [2024-10-07 14:51:46.072462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.400 qpair failed and we were unable to recover it.
00:41:22.676 [2024-10-07 14:51:46.072779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.676 [2024-10-07 14:51:46.072793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.676 qpair failed and we were unable to recover it.
00:41:22.676 [2024-10-07 14:51:46.073118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.676 [2024-10-07 14:51:46.073135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.676 qpair failed and we were unable to recover it.
00:41:22.676 [2024-10-07 14:51:46.074112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.676 [2024-10-07 14:51:46.074142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.676 qpair failed and we were unable to recover it.
00:41:22.676 [2024-10-07 14:51:46.074463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.676 [2024-10-07 14:51:46.074479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.676 qpair failed and we were unable to recover it.
00:41:22.676 [2024-10-07 14:51:46.074816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.676 [2024-10-07 14:51:46.074829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.676 qpair failed and we were unable to recover it.
00:41:22.676 [2024-10-07 14:51:46.075637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.676 [2024-10-07 14:51:46.075665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.676 qpair failed and we were unable to recover it.
00:41:22.676 [2024-10-07 14:51:46.075912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.676 [2024-10-07 14:51:46.075927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.676 qpair failed and we were unable to recover it.
00:41:22.676 [2024-10-07 14:51:46.076155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.676 [2024-10-07 14:51:46.076170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.676 qpair failed and we were unable to recover it.
00:41:22.676 [2024-10-07 14:51:46.076461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.076475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.076787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.076800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.077135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.077149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.077462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.077475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.077785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.077800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.078112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.078126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.078462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.078476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.078792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.078805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.079039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.079052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.079341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.079354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.079664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.079679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.079880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.079893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.080218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.080232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.080531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.080545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.080832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.080846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.081163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.081177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.081500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.081514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.081825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.081838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.082102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.082117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.082429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.082445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.082749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.082762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.083074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.083088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.083479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.083492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.083821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.083834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.084146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.084159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.084454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.084468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.084803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.084816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.085116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.677 [2024-10-07 14:51:46.085130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.677 qpair failed and we were unable to recover it.
00:41:22.677 [2024-10-07 14:51:46.085451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.085464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.085787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.085801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.086134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.086149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.086462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.086476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.086777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.086791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.087106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.087120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.087415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.087430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.087752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.087765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.088085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.088099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.088312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.088325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.088611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.088625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.088912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.088925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.089211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.089225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.089397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.089412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.089710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.089725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.090036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.090050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.090334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.090353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.090668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.090681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.091057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.091072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.091406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.091419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.091627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.091641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.091925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.091939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.092311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.092325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.092608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.092621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.092738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.092751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.092951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e700 is same with the state(6) to be set
00:41:22.678 [2024-10-07 14:51:46.093581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.093690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003c0080 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.094030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.094087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003c0080 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.678 [2024-10-07 14:51:46.094478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.678 [2024-10-07 14:51:46.094522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003c0080 with addr=10.0.0.2, port=4420
00:41:22.678 qpair failed and we were unable to recover it.
00:41:22.679 [2024-10-07 14:51:46.094833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.679 [2024-10-07 14:51:46.094852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.679 qpair failed and we were unable to recover it.
00:41:22.679 [2024-10-07 14:51:46.095171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.679 [2024-10-07 14:51:46.095186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.679 qpair failed and we were unable to recover it.
00:41:22.679 [2024-10-07 14:51:46.095480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.679 [2024-10-07 14:51:46.095495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.679 qpair failed and we were unable to recover it.
00:41:22.679 [2024-10-07 14:51:46.095611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.679 [2024-10-07 14:51:46.095626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.679 qpair failed and we were unable to recover it.
00:41:22.679 [2024-10-07 14:51:46.096043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.679 [2024-10-07 14:51:46.096143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003c0080 with addr=10.0.0.2, port=4420
00:41:22.679 qpair failed and we were unable to recover it.
00:41:22.679 [2024-10-07 14:51:46.096576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.679 [2024-10-07 14:51:46.096623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003c0080 with addr=10.0.0.2, port=4420
00:41:22.679 qpair failed and we were unable to recover it.
00:41:22.679 [2024-10-07 14:51:46.097016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.679 [2024-10-07 14:51:46.097059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003c0080 with addr=10.0.0.2, port=4420 00:41:22.679 qpair failed and we were unable to recover it. 00:41:22.679 [2024-10-07 14:51:46.097485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.679 [2024-10-07 14:51:46.097500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.679 qpair failed and we were unable to recover it. 00:41:22.679 [2024-10-07 14:51:46.097833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.679 [2024-10-07 14:51:46.097848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.679 qpair failed and we were unable to recover it. 00:41:22.679 [2024-10-07 14:51:46.098166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.679 [2024-10-07 14:51:46.098180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.679 qpair failed and we were unable to recover it. 00:41:22.679 [2024-10-07 14:51:46.098382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.679 [2024-10-07 14:51:46.098395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.679 qpair failed and we were unable to recover it. 
00:41:22.679 [2024-10-07 14:51:46.098599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.679 [2024-10-07 14:51:46.098612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.679 qpair failed and we were unable to recover it. 00:41:22.679 [2024-10-07 14:51:46.098731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.679 [2024-10-07 14:51:46.098744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.679 qpair failed and we were unable to recover it. 00:41:22.679 [2024-10-07 14:51:46.099008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.679 [2024-10-07 14:51:46.099021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.679 qpair failed and we were unable to recover it. 00:41:22.679 [2024-10-07 14:51:46.099331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.679 [2024-10-07 14:51:46.099344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.679 qpair failed and we were unable to recover it. 00:41:22.679 [2024-10-07 14:51:46.099683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.679 [2024-10-07 14:51:46.099696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.679 qpair failed and we were unable to recover it. 
00:41:22.679 [2024-10-07 14:51:46.100008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.679 [2024-10-07 14:51:46.100022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.679 qpair failed and we were unable to recover it. 00:41:22.679 [2024-10-07 14:51:46.100280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.679 [2024-10-07 14:51:46.100293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.679 qpair failed and we were unable to recover it. 00:41:22.679 [2024-10-07 14:51:46.100639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.679 [2024-10-07 14:51:46.100653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.679 qpair failed and we were unable to recover it. 00:41:22.679 [2024-10-07 14:51:46.100871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.679 [2024-10-07 14:51:46.100884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.679 qpair failed and we were unable to recover it. 00:41:22.679 [2024-10-07 14:51:46.101096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.679 [2024-10-07 14:51:46.101110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.679 qpair failed and we were unable to recover it. 
00:41:22.679 [2024-10-07 14:51:46.101419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.679 [2024-10-07 14:51:46.101433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.679 qpair failed and we were unable to recover it. 00:41:22.679 [2024-10-07 14:51:46.101729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.679 [2024-10-07 14:51:46.101742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.679 qpair failed and we were unable to recover it. 00:41:22.679 [2024-10-07 14:51:46.102075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.679 [2024-10-07 14:51:46.102091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.679 qpair failed and we were unable to recover it. 00:41:22.679 [2024-10-07 14:51:46.102420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.679 [2024-10-07 14:51:46.102433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.679 qpair failed and we were unable to recover it. 00:41:22.679 [2024-10-07 14:51:46.102761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.679 [2024-10-07 14:51:46.102775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.679 qpair failed and we were unable to recover it. 
00:41:22.679 [2024-10-07 14:51:46.103066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.679 [2024-10-07 14:51:46.103087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.679 qpair failed and we were unable to recover it. 00:41:22.679 [2024-10-07 14:51:46.103439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.679 [2024-10-07 14:51:46.103453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.679 qpair failed and we were unable to recover it. 00:41:22.679 [2024-10-07 14:51:46.103733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.103747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.103951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.103968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.104267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.104282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 
00:41:22.680 [2024-10-07 14:51:46.104599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.104613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.104953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.104967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.105176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.105191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.105503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.105517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.105808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.105822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 
00:41:22.680 [2024-10-07 14:51:46.106026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.106041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.106387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.106400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.106689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.106709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.107043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.107057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.107402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.107424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 
00:41:22.680 [2024-10-07 14:51:46.107718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.107731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.108051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.108064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.108369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.108383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.108719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.108733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.109047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.109061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 
00:41:22.680 [2024-10-07 14:51:46.109462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.109476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.109780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.109794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.110083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.110097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.110436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.110450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.110622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.110635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 
00:41:22.680 [2024-10-07 14:51:46.110967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.110982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.111297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.111312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.111658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.111672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.111985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.112009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.112334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.112347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 
00:41:22.680 [2024-10-07 14:51:46.112565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.112579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.112811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.112825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.113156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.113170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.113457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.113472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.113803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.113817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 
00:41:22.680 [2024-10-07 14:51:46.114024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.114038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.114323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.114336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.114631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.114644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.114841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.680 [2024-10-07 14:51:46.114854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.680 qpair failed and we were unable to recover it. 00:41:22.680 [2024-10-07 14:51:46.115169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.681 [2024-10-07 14:51:46.115183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.681 qpair failed and we were unable to recover it. 
00:41:22.681 [2024-10-07 14:51:46.115486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.681 [2024-10-07 14:51:46.115499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.681 qpair failed and we were unable to recover it. 00:41:22.681 [2024-10-07 14:51:46.115834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.681 [2024-10-07 14:51:46.115849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.681 qpair failed and we were unable to recover it. 00:41:22.681 [2024-10-07 14:51:46.116080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.681 [2024-10-07 14:51:46.116094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.681 qpair failed and we were unable to recover it. 00:41:22.681 [2024-10-07 14:51:46.116267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.681 [2024-10-07 14:51:46.116283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.681 qpair failed and we were unable to recover it. 00:41:22.681 [2024-10-07 14:51:46.116597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.681 [2024-10-07 14:51:46.116611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.681 qpair failed and we were unable to recover it. 
00:41:22.681 [2024-10-07 14:51:46.116928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.681 [2024-10-07 14:51:46.116941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.681 qpair failed and we were unable to recover it. 00:41:22.681 [2024-10-07 14:51:46.117282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.681 [2024-10-07 14:51:46.117295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.681 qpair failed and we were unable to recover it. 00:41:22.681 [2024-10-07 14:51:46.117618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.681 [2024-10-07 14:51:46.117631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.681 qpair failed and we were unable to recover it. 00:41:22.681 [2024-10-07 14:51:46.117988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.681 [2024-10-07 14:51:46.118012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.681 qpair failed and we were unable to recover it. 00:41:22.681 [2024-10-07 14:51:46.118330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.681 [2024-10-07 14:51:46.118343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.681 qpair failed and we were unable to recover it. 
00:41:22.681 [2024-10-07 14:51:46.118664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.681 [2024-10-07 14:51:46.118677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.681 qpair failed and we were unable to recover it. 00:41:22.681 [2024-10-07 14:51:46.119015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.681 [2024-10-07 14:51:46.119029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.681 qpair failed and we were unable to recover it. 00:41:22.681 [2024-10-07 14:51:46.119400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.681 [2024-10-07 14:51:46.119413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.681 qpair failed and we were unable to recover it. 00:41:22.681 [2024-10-07 14:51:46.119728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.681 [2024-10-07 14:51:46.119741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.681 qpair failed and we were unable to recover it. 00:41:22.681 [2024-10-07 14:51:46.120041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.681 [2024-10-07 14:51:46.120054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.681 qpair failed and we were unable to recover it. 
00:41:22.681 [2024-10-07 14:51:46.120270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.681 [2024-10-07 14:51:46.120283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.681 qpair failed and we were unable to recover it.
[... previous three messages repeated 114 more times between 14:51:46.120599 and 14:51:46.156488: connect() to 10.0.0.2 port 4420 refused with errno = 111 (ECONNREFUSED), qpair failed and could not be recovered ...]
00:41:22.684 [2024-10-07 14:51:46.156828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.684 [2024-10-07 14:51:46.156841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.684 qpair failed and we were unable to recover it. 00:41:22.684 [2024-10-07 14:51:46.157037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.684 [2024-10-07 14:51:46.157052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.684 qpair failed and we were unable to recover it. 00:41:22.684 [2024-10-07 14:51:46.157369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.684 [2024-10-07 14:51:46.157382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.684 qpair failed and we were unable to recover it. 00:41:22.684 [2024-10-07 14:51:46.157668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.684 [2024-10-07 14:51:46.157681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.684 qpair failed and we were unable to recover it. 00:41:22.684 [2024-10-07 14:51:46.158019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.684 [2024-10-07 14:51:46.158033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.684 qpair failed and we were unable to recover it. 
00:41:22.684 [2024-10-07 14:51:46.158299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.684 [2024-10-07 14:51:46.158313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.684 qpair failed and we were unable to recover it. 00:41:22.684 [2024-10-07 14:51:46.158612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.684 [2024-10-07 14:51:46.158626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.684 qpair failed and we were unable to recover it. 00:41:22.684 [2024-10-07 14:51:46.158940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.684 [2024-10-07 14:51:46.158953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.684 qpair failed and we were unable to recover it. 00:41:22.684 [2024-10-07 14:51:46.159268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.684 [2024-10-07 14:51:46.159283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.684 qpair failed and we were unable to recover it. 00:41:22.684 [2024-10-07 14:51:46.159615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.684 [2024-10-07 14:51:46.159629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.684 qpair failed and we were unable to recover it. 
00:41:22.684 [2024-10-07 14:51:46.159950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.684 [2024-10-07 14:51:46.159964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.684 qpair failed and we were unable to recover it. 00:41:22.684 [2024-10-07 14:51:46.160251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.684 [2024-10-07 14:51:46.160270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.684 qpair failed and we were unable to recover it. 00:41:22.684 [2024-10-07 14:51:46.160592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.684 [2024-10-07 14:51:46.160606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.684 qpair failed and we were unable to recover it. 00:41:22.684 [2024-10-07 14:51:46.160921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.684 [2024-10-07 14:51:46.160942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.684 qpair failed and we were unable to recover it. 00:41:22.684 [2024-10-07 14:51:46.161248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.684 [2024-10-07 14:51:46.161263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.684 qpair failed and we were unable to recover it. 
00:41:22.684 [2024-10-07 14:51:46.161590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.684 [2024-10-07 14:51:46.161604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.684 qpair failed and we were unable to recover it. 00:41:22.684 [2024-10-07 14:51:46.161924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.684 [2024-10-07 14:51:46.161937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.684 qpair failed and we were unable to recover it. 00:41:22.684 [2024-10-07 14:51:46.162330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.684 [2024-10-07 14:51:46.162343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.684 qpair failed and we were unable to recover it. 00:41:22.684 [2024-10-07 14:51:46.162639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.684 [2024-10-07 14:51:46.162654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.684 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.162971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.162985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 
00:41:22.685 [2024-10-07 14:51:46.163217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.163231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.163527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.163541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.163857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.163870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.164731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.164759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.165108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.165123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 
00:41:22.685 [2024-10-07 14:51:46.165458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.165472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.165786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.165800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.166136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.166150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.166446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.166460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.166771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.166785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 
00:41:22.685 [2024-10-07 14:51:46.167115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.167129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.167459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.167473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.167625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.167639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.167975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.167988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.168328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.168342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 
00:41:22.685 [2024-10-07 14:51:46.168656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.168669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.169011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.169026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.169243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.169258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.169581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.169594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.169928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.169947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 
00:41:22.685 [2024-10-07 14:51:46.170248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.170262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.170653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.170666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.170976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.170989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.171328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.171343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.171637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.171658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 
00:41:22.685 [2024-10-07 14:51:46.171956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.171970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.172190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.172204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.172522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.172536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.172866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.172880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.173050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.173065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 
00:41:22.685 [2024-10-07 14:51:46.173421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.173435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.173719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.173734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.174134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.174149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.174459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.174474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.174808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.174822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 
00:41:22.685 [2024-10-07 14:51:46.175043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.175057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.175395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.175408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.175723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.685 [2024-10-07 14:51:46.175737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.685 qpair failed and we were unable to recover it. 00:41:22.685 [2024-10-07 14:51:46.176012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.176026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.176397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.176411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 
00:41:22.686 [2024-10-07 14:51:46.176725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.176740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.177074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.177088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.177399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.177412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.177713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.177727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.177941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.177955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 
00:41:22.686 [2024-10-07 14:51:46.178280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.178294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.178630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.178644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.178964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.178979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.179316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.179331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.179600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.179614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 
00:41:22.686 [2024-10-07 14:51:46.179937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.179951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.180277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.180292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.180503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.180517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.180837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.180851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.181177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.181192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 
00:41:22.686 [2024-10-07 14:51:46.181497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.181510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.181822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.181836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.182137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.182152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.182369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.182383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.182667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.182681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 
00:41:22.686 [2024-10-07 14:51:46.182940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.182953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.183272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.183286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.183599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.183613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.183942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.183955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.184313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.184329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 
00:41:22.686 [2024-10-07 14:51:46.184523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.184539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.184827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.184840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.185019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.185036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.185323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.185337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.185626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.185639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 
00:41:22.686 [2024-10-07 14:51:46.185953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.185966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.186272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.186286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.186603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.686 [2024-10-07 14:51:46.186617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.686 qpair failed and we were unable to recover it. 00:41:22.686 [2024-10-07 14:51:46.186814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.186829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.187158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.187172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 
00:41:22.687 [2024-10-07 14:51:46.187473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.187487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.187773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.187788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.188076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.188090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.188429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.188443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.188764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.188778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 
00:41:22.687 [2024-10-07 14:51:46.189162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.189178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.189510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.189524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.189739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.189753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.190035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.190049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.190381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.190395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 
00:41:22.687 [2024-10-07 14:51:46.190747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.190760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.191104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.191117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.191440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.191453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.191760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.191773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.192092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.192107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 
00:41:22.687 [2024-10-07 14:51:46.192426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.192440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.192652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.192665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.193008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.193023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.193331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.193345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.193674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.193688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 
00:41:22.687 [2024-10-07 14:51:46.194007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.194021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.194359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.194372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.194592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.194606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.194957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.194970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.195236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.195251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 
00:41:22.687 [2024-10-07 14:51:46.195451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.195465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.195782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.195796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.196120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.196135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.196423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.196437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.196753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.196767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 
00:41:22.687 [2024-10-07 14:51:46.197064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.197078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.197380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.197393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.197713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.197733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.197951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.197965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.198332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.198347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 
00:41:22.687 [2024-10-07 14:51:46.198707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.198721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.198933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.198946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.687 qpair failed and we were unable to recover it. 00:41:22.687 [2024-10-07 14:51:46.199313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.687 [2024-10-07 14:51:46.199327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.199647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.199661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.199991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.200012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 
00:41:22.688 [2024-10-07 14:51:46.200357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.200371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.200767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.200780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.201094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.201108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.201288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.201308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.201591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.201605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 
00:41:22.688 [2024-10-07 14:51:46.201912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.201928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.202314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.202328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.202660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.202674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.202891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.202904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.203238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.203252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 
00:41:22.688 [2024-10-07 14:51:46.203441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.203454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.203663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.203677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.203968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.203981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.204369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.204383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.204713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.204727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 
00:41:22.688 [2024-10-07 14:51:46.205058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.205072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.205376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.205390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.205658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.205671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.205973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.205987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.206301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.206316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 
00:41:22.688 [2024-10-07 14:51:46.206615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.206629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.206953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.206966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.207170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.207186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.207530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.207544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.207833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.207847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 
00:41:22.688 [2024-10-07 14:51:46.208179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.208193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.208391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.208405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.208494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.208509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.208858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.208872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.209096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.209109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 
00:41:22.688 [2024-10-07 14:51:46.209435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.209448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.209742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.209755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.210075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.210089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.210424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.210438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.210808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.210821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 
00:41:22.688 [2024-10-07 14:51:46.211078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.211092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.688 qpair failed and we were unable to recover it. 00:41:22.688 [2024-10-07 14:51:46.211416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.688 [2024-10-07 14:51:46.211429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.689 qpair failed and we were unable to recover it. 00:41:22.689 [2024-10-07 14:51:46.211647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.689 [2024-10-07 14:51:46.211661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.689 qpair failed and we were unable to recover it. 00:41:22.689 [2024-10-07 14:51:46.211913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.689 [2024-10-07 14:51:46.211927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.689 qpair failed and we were unable to recover it. 00:41:22.689 [2024-10-07 14:51:46.212232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.689 [2024-10-07 14:51:46.212247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.689 qpair failed and we were unable to recover it. 
00:41:22.691 [2024-10-07 14:51:46.242935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.691 [2024-10-07 14:51:46.242948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.691 qpair failed and we were unable to recover it.
00:41:22.691 [2024-10-07 14:51:46.243493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.691 [2024-10-07 14:51:46.243604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003c0080 with addr=10.0.0.2, port=4420
00:41:22.691 qpair failed and we were unable to recover it.
00:41:22.691 [2024-10-07 14:51:46.243994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.691 [2024-10-07 14:51:46.244064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003c0080 with addr=10.0.0.2, port=4420
00:41:22.691 qpair failed and we were unable to recover it.
00:41:22.691 [2024-10-07 14:51:46.244406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.691 [2024-10-07 14:51:46.244421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.691 qpair failed and we were unable to recover it.
00:41:22.691 [2024-10-07 14:51:46.244763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.691 [2024-10-07 14:51:46.244777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.691 qpair failed and we were unable to recover it.
00:41:22.691 [2024-10-07 14:51:46.245118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.691 [2024-10-07 14:51:46.245132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.691 qpair failed and we were unable to recover it. 00:41:22.691 [2024-10-07 14:51:46.245455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.691 [2024-10-07 14:51:46.245468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.691 qpair failed and we were unable to recover it. 00:41:22.691 [2024-10-07 14:51:46.245809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.691 [2024-10-07 14:51:46.245823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.691 qpair failed and we were unable to recover it. 00:41:22.691 [2024-10-07 14:51:46.246027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.691 [2024-10-07 14:51:46.246041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.691 qpair failed and we were unable to recover it. 00:41:22.691 [2024-10-07 14:51:46.246333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.691 [2024-10-07 14:51:46.246346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.691 qpair failed and we were unable to recover it. 
00:41:22.691 [2024-10-07 14:51:46.246549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.691 [2024-10-07 14:51:46.246562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.691 qpair failed and we were unable to recover it. 00:41:22.691 [2024-10-07 14:51:46.246906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.691 [2024-10-07 14:51:46.246919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.691 qpair failed and we were unable to recover it. 00:41:22.691 [2024-10-07 14:51:46.247186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.691 [2024-10-07 14:51:46.247200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.691 qpair failed and we were unable to recover it. 00:41:22.691 [2024-10-07 14:51:46.247490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.691 [2024-10-07 14:51:46.247503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.691 qpair failed and we were unable to recover it. 00:41:22.691 [2024-10-07 14:51:46.247823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.247846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 
00:41:22.692 [2024-10-07 14:51:46.248169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.248182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.248377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.248391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.248729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.248742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.248953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.248966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.249094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.249107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 
00:41:22.692 [2024-10-07 14:51:46.249459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.249472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.249684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.249697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.250015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.250029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.250412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.250425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.250635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.250648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 
00:41:22.692 [2024-10-07 14:51:46.250942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.250955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.251299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.251313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.251644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.251658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.252007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.252021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.252214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.252227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 
00:41:22.692 [2024-10-07 14:51:46.252575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.252590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.252892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.252906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.253239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.253252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.253458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.253470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.253796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.253809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 
00:41:22.692 [2024-10-07 14:51:46.254133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.254147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.254469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.254483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.254820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.254835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.255215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.255230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.255432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.255446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 
00:41:22.692 [2024-10-07 14:51:46.255817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.255830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.256156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.256170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.256496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.256509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.256793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.256807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.257008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.257022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 
00:41:22.692 [2024-10-07 14:51:46.257319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.257333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.257528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.257541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.257740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.257752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.257972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.257985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.258277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.258290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 
00:41:22.692 [2024-10-07 14:51:46.258581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.258595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.258787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.258801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.259178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.259193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.259478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.692 [2024-10-07 14:51:46.259492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.692 qpair failed and we were unable to recover it. 00:41:22.692 [2024-10-07 14:51:46.259815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.259828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 
00:41:22.693 [2024-10-07 14:51:46.259999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.260016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.260351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.260364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.260577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.260592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.260798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.260811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.261141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.261155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 
00:41:22.693 [2024-10-07 14:51:46.261441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.261455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.261735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.261748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.262124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.262138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.262466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.262479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.262765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.262778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 
00:41:22.693 [2024-10-07 14:51:46.262967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.262980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.263323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.263338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.263518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.263533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.263960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.263974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.264188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.264202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 
00:41:22.693 [2024-10-07 14:51:46.264557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.264574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.264904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.264918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.265195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.265209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.265422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.265436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.265710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.265724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 
00:41:22.693 [2024-10-07 14:51:46.265929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.265942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.266282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.266296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.266530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.266544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.266870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.266884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.267301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.267315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 
00:41:22.693 [2024-10-07 14:51:46.267690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.267704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.268045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.268058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.268314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.268327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.268542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.268555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.268895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.268910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 
00:41:22.693 [2024-10-07 14:51:46.269264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.269279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.269582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.269595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.269908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.269921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.270248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.270261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 00:41:22.693 [2024-10-07 14:51:46.270472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.270485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 
00:41:22.693 [2024-10-07 14:51:46.270716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.693 [2024-10-07 14:51:46.270729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.693 qpair failed and we were unable to recover it. 
00:41:22.696 [... the same three-line pattern (posix.c:1055:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every connect attempt from 14:51:46.270716 through 14:51:46.305524 ...]
00:41:22.696 [2024-10-07 14:51:46.305819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.696 [2024-10-07 14:51:46.305832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.696 qpair failed and we were unable to recover it. 00:41:22.696 [2024-10-07 14:51:46.306071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.696 [2024-10-07 14:51:46.306085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.696 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.306399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.306412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.306717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.306730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.307012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.307026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 
00:41:22.697 [2024-10-07 14:51:46.307308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.307321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.307625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.307638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.307841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.307855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.308150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.308164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.308384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.308397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 
00:41:22.697 [2024-10-07 14:51:46.308756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.308769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.309072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.309086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.309403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.309416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.309732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.309746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.310057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.310071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 
00:41:22.697 [2024-10-07 14:51:46.310407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.310421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.310730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.310744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.311061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.311075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.311405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.311419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.311611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.311625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 
00:41:22.697 [2024-10-07 14:51:46.311920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.311933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.312292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.312305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.312588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.312602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.312912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.312926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.313285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.313299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 
00:41:22.697 [2024-10-07 14:51:46.313632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.313648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.313863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.313876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.314239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.314253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.314573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.314586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.314862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.314876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 
00:41:22.697 [2024-10-07 14:51:46.315226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.315239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.315451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.315464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.315787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.315801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.316120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.316134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.316323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.316337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 
00:41:22.697 [2024-10-07 14:51:46.316541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.316554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.316743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.316757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.316976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.316990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.317284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.317297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.697 [2024-10-07 14:51:46.317651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.317665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 
00:41:22.697 [2024-10-07 14:51:46.317983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.697 [2024-10-07 14:51:46.317997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.697 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.318281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.318294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.318608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.318621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.318935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.318949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.319235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.319250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 
00:41:22.698 [2024-10-07 14:51:46.319470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.319484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.319622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.319637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.319956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.319969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.320184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.320197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.320506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.320519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 
00:41:22.698 [2024-10-07 14:51:46.320837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.320850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.321168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.321182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.321473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.321487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.321707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.321721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.322058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.322072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 
00:41:22.698 [2024-10-07 14:51:46.322305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.322318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.322640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.322654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.322957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.322970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.323290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.323304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.323614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.323628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 
00:41:22.698 [2024-10-07 14:51:46.323955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.323969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.324359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.324373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.324653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.324667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.325019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.325033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.325372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.325385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 
00:41:22.698 [2024-10-07 14:51:46.325702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.325717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.326079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.326093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.326368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.326382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.326713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.326726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.327053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.327067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 
00:41:22.698 [2024-10-07 14:51:46.327394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.327407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.327735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.327748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.327940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.327955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.328274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.328287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.328592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.328606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 
00:41:22.698 [2024-10-07 14:51:46.328938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.328952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.329239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.329255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.329571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.329584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.329869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.329882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 00:41:22.698 [2024-10-07 14:51:46.330175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.698 [2024-10-07 14:51:46.330189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.698 qpair failed and we were unable to recover it. 
00:41:22.699 [2024-10-07 14:51:46.330382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.330396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.330896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.331018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003c0080 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.331479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.331529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003c0080 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.331896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.331911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.332224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.332237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 
00:41:22.699 [2024-10-07 14:51:46.332611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.332624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.332940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.332954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.333246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.333260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.333575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.333590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.333811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.333825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 
00:41:22.699 [2024-10-07 14:51:46.334064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.334078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.334393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.334407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.334723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.334741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.335067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.335081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.335458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.335472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 
00:41:22.699 [2024-10-07 14:51:46.335760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.335773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.336115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.336128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.336439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.336452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.336791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.336804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.337099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.337113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 
00:41:22.699 [2024-10-07 14:51:46.337211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.337225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.337513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.337527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.337831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.337845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.338145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.338159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.338480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.338493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 
00:41:22.699 [2024-10-07 14:51:46.338822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.338835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.339121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.339135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.339446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.339459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.339794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.339809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.340036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.340050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 
00:41:22.699 [2024-10-07 14:51:46.340344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.340359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.340708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.340721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.341037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.699 [2024-10-07 14:51:46.341052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.699 qpair failed and we were unable to recover it. 00:41:22.699 [2024-10-07 14:51:46.341373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.341387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.341766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.341780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 
00:41:22.700 [2024-10-07 14:51:46.342103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.342117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.342429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.342442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.342780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.342793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.343108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.343122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.343430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.343444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 
00:41:22.700 [2024-10-07 14:51:46.343790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.343803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.344131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.344145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.344455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.344469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.344814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.344827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.345152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.345166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 
00:41:22.700 [2024-10-07 14:51:46.345483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.345496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.345829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.345843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.346233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.346247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.346517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.346531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.346879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.346892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 
00:41:22.700 [2024-10-07 14:51:46.347193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.347207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.347521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.347535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.347866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.347882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.348308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.348322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.348619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.348633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 
00:41:22.700 [2024-10-07 14:51:46.348965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.348978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.349302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.349316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.349606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.349625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.349913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.349926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.350020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.350035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 
00:41:22.700 [2024-10-07 14:51:46.350333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.350346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.350636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.350650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.350989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.351007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.351304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.351318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.351633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.351646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 
00:41:22.700 [2024-10-07 14:51:46.351976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.351989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.352295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.352309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.352626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.352640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.352966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.352979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.353346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.353361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 
00:41:22.700 [2024-10-07 14:51:46.353680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.353693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.700 [2024-10-07 14:51:46.353979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.700 [2024-10-07 14:51:46.353993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.700 qpair failed and we were unable to recover it. 00:41:22.701 [2024-10-07 14:51:46.354208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.701 [2024-10-07 14:51:46.354222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.701 qpair failed and we were unable to recover it. 00:41:22.701 [2024-10-07 14:51:46.354539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.701 [2024-10-07 14:51:46.354552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.701 qpair failed and we were unable to recover it. 00:41:22.701 [2024-10-07 14:51:46.354856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.701 [2024-10-07 14:51:46.354869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.701 qpair failed and we were unable to recover it. 
00:41:22.701 [2024-10-07 14:51:46.355170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.701 [2024-10-07 14:51:46.355184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.701 qpair failed and we were unable to recover it. 00:41:22.701 [2024-10-07 14:51:46.355565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.701 [2024-10-07 14:51:46.355579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.701 qpair failed and we were unable to recover it. 00:41:22.701 [2024-10-07 14:51:46.355870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.701 [2024-10-07 14:51:46.355890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.701 qpair failed and we were unable to recover it. 00:41:22.701 [2024-10-07 14:51:46.356223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.701 [2024-10-07 14:51:46.356236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.701 qpair failed and we were unable to recover it. 00:41:22.701 [2024-10-07 14:51:46.356536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.701 [2024-10-07 14:51:46.356549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.701 qpair failed and we were unable to recover it. 
00:41:22.701 [2024-10-07 14:51:46.356885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.701 [2024-10-07 14:51:46.356898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.701 qpair failed and we were unable to recover it. 00:41:22.701 [2024-10-07 14:51:46.357192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.701 [2024-10-07 14:51:46.357206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.701 qpair failed and we were unable to recover it. 00:41:22.701 [2024-10-07 14:51:46.357515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.701 [2024-10-07 14:51:46.357529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.701 qpair failed and we were unable to recover it. 00:41:22.701 [2024-10-07 14:51:46.357859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.701 [2024-10-07 14:51:46.357873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.701 qpair failed and we were unable to recover it. 00:41:22.701 [2024-10-07 14:51:46.358191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.701 [2024-10-07 14:51:46.358204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.701 qpair failed and we were unable to recover it. 
00:41:22.701 [2024-10-07 14:51:46.358494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.701 [2024-10-07 14:51:46.358509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.701 qpair failed and we were unable to recover it. 00:41:22.701 [2024-10-07 14:51:46.358817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.701 [2024-10-07 14:51:46.358830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.701 qpair failed and we were unable to recover it. 00:41:22.701 [2024-10-07 14:51:46.359134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.701 [2024-10-07 14:51:46.359148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.701 qpair failed and we were unable to recover it. 00:41:22.701 [2024-10-07 14:51:46.359477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.701 [2024-10-07 14:51:46.359490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.701 qpair failed and we were unable to recover it. 00:41:22.701 [2024-10-07 14:51:46.359828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.701 [2024-10-07 14:51:46.359842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.701 qpair failed and we were unable to recover it. 
00:41:22.701 [2024-10-07 14:51:46.360159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.701 [2024-10-07 14:51:46.360173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.701 qpair failed and we were unable to recover it. 
[... the same three-line error (posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x61500039f100 at 10.0.0.2:4420, "qpair failed and we were unable to recover it") repeats continuously for this qpair between 14:51:46.360 and 14:51:46.395; intermediate repetitions elided ...] 
00:41:22.979 [2024-10-07 14:51:46.395162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.979 [2024-10-07 14:51:46.395178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.979 qpair failed and we were unable to recover it. 
00:41:22.979 [2024-10-07 14:51:46.395473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.979 [2024-10-07 14:51:46.395487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.979 qpair failed and we were unable to recover it. 00:41:22.979 [2024-10-07 14:51:46.395809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.979 [2024-10-07 14:51:46.395823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.979 qpair failed and we were unable to recover it. 00:41:22.979 [2024-10-07 14:51:46.396133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.979 [2024-10-07 14:51:46.396147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.979 qpair failed and we were unable to recover it. 00:41:22.979 [2024-10-07 14:51:46.396454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.979 [2024-10-07 14:51:46.396468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.979 qpair failed and we were unable to recover it. 00:41:22.979 [2024-10-07 14:51:46.396777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.979 [2024-10-07 14:51:46.396790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.979 qpair failed and we were unable to recover it. 
00:41:22.979 [2024-10-07 14:51:46.397157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.979 [2024-10-07 14:51:46.397171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.979 qpair failed and we were unable to recover it. 00:41:22.979 [2024-10-07 14:51:46.397509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.979 [2024-10-07 14:51:46.397522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.979 qpair failed and we were unable to recover it. 00:41:22.979 [2024-10-07 14:51:46.397808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.979 [2024-10-07 14:51:46.397823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.979 qpair failed and we were unable to recover it. 00:41:22.979 [2024-10-07 14:51:46.398048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.979 [2024-10-07 14:51:46.398062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.979 qpair failed and we were unable to recover it. 00:41:22.979 [2024-10-07 14:51:46.398431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.979 [2024-10-07 14:51:46.398445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.979 qpair failed and we were unable to recover it. 
00:41:22.979 [2024-10-07 14:51:46.398654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.979 [2024-10-07 14:51:46.398667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.979 qpair failed and we were unable to recover it. 00:41:22.979 [2024-10-07 14:51:46.399017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.979 [2024-10-07 14:51:46.399031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.979 qpair failed and we were unable to recover it. 00:41:22.979 [2024-10-07 14:51:46.399301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.979 [2024-10-07 14:51:46.399315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.979 qpair failed and we were unable to recover it. 00:41:22.979 [2024-10-07 14:51:46.399654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.979 [2024-10-07 14:51:46.399667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.979 qpair failed and we were unable to recover it. 00:41:22.979 [2024-10-07 14:51:46.400012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.979 [2024-10-07 14:51:46.400025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.979 qpair failed and we were unable to recover it. 
00:41:22.979 [2024-10-07 14:51:46.400336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.979 [2024-10-07 14:51:46.400349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.979 qpair failed and we were unable to recover it. 00:41:22.979 [2024-10-07 14:51:46.400569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.979 [2024-10-07 14:51:46.400582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.979 qpair failed and we were unable to recover it. 00:41:22.979 [2024-10-07 14:51:46.400805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.979 [2024-10-07 14:51:46.400818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.979 qpair failed and we were unable to recover it. 00:41:22.979 [2024-10-07 14:51:46.401002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.979 [2024-10-07 14:51:46.401016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.979 qpair failed and we were unable to recover it. 00:41:22.979 [2024-10-07 14:51:46.401210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.979 [2024-10-07 14:51:46.401222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.979 qpair failed and we were unable to recover it. 
00:41:22.979 [2024-10-07 14:51:46.401402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.979 [2024-10-07 14:51:46.401415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.979 qpair failed and we were unable to recover it. 00:41:22.979 [2024-10-07 14:51:46.401731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.979 [2024-10-07 14:51:46.401745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.979 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.402060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.402074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.402273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.402286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.402490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.402503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 
00:41:22.980 [2024-10-07 14:51:46.402839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.402852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.403183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.403197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.403486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.403499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.403863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.403876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.404039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.404054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 
00:41:22.980 [2024-10-07 14:51:46.404332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.404346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.404660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.404673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.404957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.404971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.405203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.405216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.405547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.405560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 
00:41:22.980 [2024-10-07 14:51:46.405875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.405888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.406104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.406117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.406504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.406517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.406807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.406821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.407048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.407061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 
00:41:22.980 [2024-10-07 14:51:46.407382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.407395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.407571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.407585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.407890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.407903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.408199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.408213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.408489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.408502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 
00:41:22.980 [2024-10-07 14:51:46.408824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.408837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.409175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.409189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.409359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.409377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.409747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.409761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.409969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.409983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 
00:41:22.980 [2024-10-07 14:51:46.410292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.410306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.410638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.410652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.410986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.411010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.411316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.411330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.411641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.411655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 
00:41:22.980 [2024-10-07 14:51:46.411967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.411981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.412285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.412299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.412625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.412640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.412969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.412984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.413178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.413193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 
00:41:22.980 [2024-10-07 14:51:46.413523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.980 [2024-10-07 14:51:46.413538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.980 qpair failed and we were unable to recover it. 00:41:22.980 [2024-10-07 14:51:46.413897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.981 [2024-10-07 14:51:46.413912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.981 qpair failed and we were unable to recover it. 00:41:22.981 [2024-10-07 14:51:46.414284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.981 [2024-10-07 14:51:46.414299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.981 qpair failed and we were unable to recover it. 00:41:22.981 [2024-10-07 14:51:46.414625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.981 [2024-10-07 14:51:46.414639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.981 qpair failed and we were unable to recover it. 00:41:22.981 [2024-10-07 14:51:46.414853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.981 [2024-10-07 14:51:46.414867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.981 qpair failed and we were unable to recover it. 
00:41:22.981 [2024-10-07 14:51:46.415221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.981 [2024-10-07 14:51:46.415235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.981 qpair failed and we were unable to recover it. 00:41:22.981 [2024-10-07 14:51:46.415564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.981 [2024-10-07 14:51:46.415579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.981 qpair failed and we were unable to recover it. 00:41:22.981 [2024-10-07 14:51:46.415893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.981 [2024-10-07 14:51:46.415907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.981 qpair failed and we were unable to recover it. 00:41:22.981 [2024-10-07 14:51:46.416314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.981 [2024-10-07 14:51:46.416329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.981 qpair failed and we were unable to recover it. 00:41:22.981 [2024-10-07 14:51:46.416652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.981 [2024-10-07 14:51:46.416666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.981 qpair failed and we were unable to recover it. 
00:41:22.981 [2024-10-07 14:51:46.416981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.981 [2024-10-07 14:51:46.416995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.981 qpair failed and we were unable to recover it. 00:41:22.981 [2024-10-07 14:51:46.417204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.981 [2024-10-07 14:51:46.417218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.981 qpair failed and we were unable to recover it. 00:41:22.981 [2024-10-07 14:51:46.417438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.981 [2024-10-07 14:51:46.417452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.981 qpair failed and we were unable to recover it. 00:41:22.981 [2024-10-07 14:51:46.417763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.981 [2024-10-07 14:51:46.417777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.981 qpair failed and we were unable to recover it. 00:41:22.981 [2024-10-07 14:51:46.418117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.981 [2024-10-07 14:51:46.418132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.981 qpair failed and we were unable to recover it. 
00:41:22.981 [2024-10-07 14:51:46.418451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.981 [2024-10-07 14:51:46.418465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.981 qpair failed and we were unable to recover it.
00:41:22.981 [2024-10-07 14:51:46.418771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.981 [2024-10-07 14:51:46.418786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.981 qpair failed and we were unable to recover it.
00:41:22.981 [2024-10-07 14:51:46.419106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.981 [2024-10-07 14:51:46.419122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.981 qpair failed and we were unable to recover it.
00:41:22.981 [2024-10-07 14:51:46.419444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.981 [2024-10-07 14:51:46.419458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.981 qpair failed and we were unable to recover it.
00:41:22.981 [2024-10-07 14:51:46.419784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.981 [2024-10-07 14:51:46.419798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.981 qpair failed and we were unable to recover it.
00:41:22.981 [2024-10-07 14:51:46.420009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.981 [2024-10-07 14:51:46.420024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.981 qpair failed and we were unable to recover it.
00:41:22.981 [2024-10-07 14:51:46.420367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.981 [2024-10-07 14:51:46.420382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.981 qpair failed and we were unable to recover it.
00:41:22.981 [2024-10-07 14:51:46.420557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.981 [2024-10-07 14:51:46.420571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.981 qpair failed and we were unable to recover it.
00:41:22.981 [2024-10-07 14:51:46.420871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.981 [2024-10-07 14:51:46.420885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.981 qpair failed and we were unable to recover it.
00:41:22.981 [2024-10-07 14:51:46.421193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.981 [2024-10-07 14:51:46.421207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.981 qpair failed and we were unable to recover it.
00:41:22.981 [2024-10-07 14:51:46.421362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.981 [2024-10-07 14:51:46.421376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.981 qpair failed and we were unable to recover it.
00:41:22.981 [2024-10-07 14:51:46.421713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.981 [2024-10-07 14:51:46.421727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.981 qpair failed and we were unable to recover it.
00:41:22.981 [2024-10-07 14:51:46.421908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.981 [2024-10-07 14:51:46.421926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.981 qpair failed and we were unable to recover it.
00:41:22.981 [2024-10-07 14:51:46.422303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.981 [2024-10-07 14:51:46.422317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.981 qpair failed and we were unable to recover it.
00:41:22.981 [2024-10-07 14:51:46.422640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.981 [2024-10-07 14:51:46.422654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.981 qpair failed and we were unable to recover it.
00:41:22.981 [2024-10-07 14:51:46.423013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.981 [2024-10-07 14:51:46.423028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.981 qpair failed and we were unable to recover it.
00:41:22.981 [2024-10-07 14:51:46.423350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.981 [2024-10-07 14:51:46.423364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.981 qpair failed and we were unable to recover it.
00:41:22.981 [2024-10-07 14:51:46.423648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.981 [2024-10-07 14:51:46.423663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.981 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.423993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.424011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.424294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.424309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.424636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.424650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.424987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.425004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.425321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.425335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.425643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.425657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.425980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.425994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.426328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.426342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.426525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.426540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.426867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.426881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.427169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.427183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.427476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.427490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.427813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.427827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.428247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.428262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.428547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.428560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.428871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.428884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.429250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.429264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.429609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.429622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.429905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.429918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.430146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.430160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.430461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.430474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.430768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.430781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.430998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.431016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.431346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.431359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.431692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.431705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.432030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.432044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.432359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.432373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.432582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.432595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.432933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.432947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.433243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.433257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.433573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.433586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.433917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.433931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.434285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.434300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.434618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.434632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.434946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.434962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.435268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.435282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.435599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.435620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.435902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.435916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.436220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.982 [2024-10-07 14:51:46.436234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.982 qpair failed and we were unable to recover it.
00:41:22.982 [2024-10-07 14:51:46.436540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.436553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.436877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.436891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.437211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.437225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.437536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.437555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.437878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.437892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.438225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.438239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.438459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.438473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.438790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.438803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.439153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.439167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.439369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.439383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.439689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.439702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.439890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.439903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.440197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.440211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.440541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.440554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.440846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.440859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.441167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.441180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.441466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.441480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.441791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.441804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.442124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.442138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.442489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.442503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.442679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.442694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.442988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.443014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.443404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.443419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.443747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.443761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.444073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.444087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.444415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.444430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.444761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.444775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.445091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.445105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.445439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.445453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.445823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.445837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.446144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.446158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.446545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.446559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.446870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.446884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.447092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.447107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.447430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.447444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.447731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.447749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.448073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.448087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.448402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.448423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.448775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.448789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.449104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.983 [2024-10-07 14:51:46.449118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.983 qpair failed and we were unable to recover it.
00:41:22.983 [2024-10-07 14:51:46.449428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.984 [2024-10-07 14:51:46.449442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.984 qpair failed and we were unable to recover it.
00:41:22.984 [2024-10-07 14:51:46.449776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.984 [2024-10-07 14:51:46.449790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.984 qpair failed and we were unable to recover it.
00:41:22.984 [2024-10-07 14:51:46.450119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.984 [2024-10-07 14:51:46.450133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.984 qpair failed and we were unable to recover it.
00:41:22.984 [2024-10-07 14:51:46.450447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.984 [2024-10-07 14:51:46.450468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.984 qpair failed and we were unable to recover it.
00:41:22.984 [2024-10-07 14:51:46.450791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.984 [2024-10-07 14:51:46.450804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.984 qpair failed and we were unable to recover it.
00:41:22.984 [2024-10-07 14:51:46.451098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.984 [2024-10-07 14:51:46.451113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.984 qpair failed and we were unable to recover it.
00:41:22.984 [2024-10-07 14:51:46.451316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.984 [2024-10-07 14:51:46.451330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.984 qpair failed and we were unable to recover it.
00:41:22.984 [2024-10-07 14:51:46.451562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.984 [2024-10-07 14:51:46.451575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.984 qpair failed and we were unable to recover it.
00:41:22.984 [2024-10-07 14:51:46.451776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.984 [2024-10-07 14:51:46.451791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.984 qpair failed and we were unable to recover it.
00:41:22.984 [2024-10-07 14:51:46.452083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.984 [2024-10-07 14:51:46.452097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.984 qpair failed and we were unable to recover it.
00:41:22.984 [2024-10-07 14:51:46.452403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.984 [2024-10-07 14:51:46.452416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.984 qpair failed and we were unable to recover it.
00:41:22.984 [2024-10-07 14:51:46.452733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.984 [2024-10-07 14:51:46.452746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.984 qpair failed and we were unable to recover it.
00:41:22.984 [2024-10-07 14:51:46.453029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.984 [2024-10-07 14:51:46.453043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.984 qpair failed and we were unable to recover it.
00:41:22.984 [2024-10-07 14:51:46.453365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.984 [2024-10-07 14:51:46.453378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.984 qpair failed and we were unable to recover it.
00:41:22.984 [2024-10-07 14:51:46.453583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.984 [2024-10-07 14:51:46.453596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.984 qpair failed and we were unable to recover it.
00:41:22.984 [2024-10-07 14:51:46.453864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.984 [2024-10-07 14:51:46.453877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.984 qpair failed and we were unable to recover it.
00:41:22.984 [2024-10-07 14:51:46.454188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.984 [2024-10-07 14:51:46.454202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.984 qpair failed and we were unable to recover it.
00:41:22.984 [2024-10-07 14:51:46.454537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.984 [2024-10-07 14:51:46.454552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.984 qpair failed and we were unable to recover it.
00:41:22.984 [2024-10-07 14:51:46.454833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.984 [2024-10-07 14:51:46.454847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.984 qpair failed and we were unable to recover it. 00:41:22.984 [2024-10-07 14:51:46.455170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.984 [2024-10-07 14:51:46.455184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.984 qpair failed and we were unable to recover it. 00:41:22.984 [2024-10-07 14:51:46.455498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.984 [2024-10-07 14:51:46.455511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.984 qpair failed and we were unable to recover it. 00:41:22.984 [2024-10-07 14:51:46.455824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.984 [2024-10-07 14:51:46.455838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.984 qpair failed and we were unable to recover it. 00:41:22.984 [2024-10-07 14:51:46.456166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.984 [2024-10-07 14:51:46.456180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.984 qpair failed and we were unable to recover it. 
00:41:22.984 [2024-10-07 14:51:46.456505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.984 [2024-10-07 14:51:46.456519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.984 qpair failed and we were unable to recover it. 00:41:22.984 [2024-10-07 14:51:46.456846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.984 [2024-10-07 14:51:46.456861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.984 qpair failed and we were unable to recover it. 00:41:22.984 [2024-10-07 14:51:46.457193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.984 [2024-10-07 14:51:46.457207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.984 qpair failed and we were unable to recover it. 00:41:22.984 [2024-10-07 14:51:46.457520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.984 [2024-10-07 14:51:46.457534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.984 qpair failed and we were unable to recover it. 00:41:22.984 [2024-10-07 14:51:46.457853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.984 [2024-10-07 14:51:46.457867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.984 qpair failed and we were unable to recover it. 
00:41:22.984 [2024-10-07 14:51:46.458200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.984 [2024-10-07 14:51:46.458215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.984 qpair failed and we were unable to recover it. 00:41:22.984 [2024-10-07 14:51:46.458513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.984 [2024-10-07 14:51:46.458528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.984 qpair failed and we were unable to recover it. 00:41:22.984 [2024-10-07 14:51:46.458838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.984 [2024-10-07 14:51:46.458852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.984 qpair failed and we were unable to recover it. 00:41:22.984 [2024-10-07 14:51:46.459180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.984 [2024-10-07 14:51:46.459195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.984 qpair failed and we were unable to recover it. 00:41:22.984 [2024-10-07 14:51:46.459520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.984 [2024-10-07 14:51:46.459534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.984 qpair failed and we were unable to recover it. 
00:41:22.984 [2024-10-07 14:51:46.459904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.984 [2024-10-07 14:51:46.459917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.984 qpair failed and we were unable to recover it. 00:41:22.984 [2024-10-07 14:51:46.460212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.984 [2024-10-07 14:51:46.460227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.984 qpair failed and we were unable to recover it. 00:41:22.984 [2024-10-07 14:51:46.460541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.984 [2024-10-07 14:51:46.460558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.984 qpair failed and we were unable to recover it. 00:41:22.984 [2024-10-07 14:51:46.460882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.984 [2024-10-07 14:51:46.460896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.984 qpair failed and we were unable to recover it. 00:41:22.984 [2024-10-07 14:51:46.461218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.984 [2024-10-07 14:51:46.461233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.984 qpair failed and we were unable to recover it. 
00:41:22.984 [2024-10-07 14:51:46.461559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.984 [2024-10-07 14:51:46.461573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.984 qpair failed and we were unable to recover it. 00:41:22.984 [2024-10-07 14:51:46.461901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.984 [2024-10-07 14:51:46.461915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.984 qpair failed and we were unable to recover it. 00:41:22.984 [2024-10-07 14:51:46.462243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.462258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.462588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.462602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.462936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.462951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 
00:41:22.985 [2024-10-07 14:51:46.463150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.463165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.463481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.463494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.463797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.463810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.464157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.464172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.464487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.464508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 
00:41:22.985 [2024-10-07 14:51:46.464821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.464835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.465169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.465183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.465504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.465518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.465828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.465842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.466147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.466160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 
00:41:22.985 [2024-10-07 14:51:46.466475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.466496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.466853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.466866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.467190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.467205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.467521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.467534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.467853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.467873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 
00:41:22.985 [2024-10-07 14:51:46.468220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.468234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.468546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.468559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.468735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.468750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.469119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.469134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.469473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.469486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 
00:41:22.985 [2024-10-07 14:51:46.469801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.469814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.470156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.470169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.470481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.470496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.470817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.470830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.471189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.471203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 
00:41:22.985 [2024-10-07 14:51:46.471564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.471578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.471909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.471922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.472222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.472236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.472449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.472462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.472752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.472765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 
00:41:22.985 [2024-10-07 14:51:46.473059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.473073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.473393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.473406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.473717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.473734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.474064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.474079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.474375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.474389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 
00:41:22.985 [2024-10-07 14:51:46.474718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.474732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.475050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.475064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.985 qpair failed and we were unable to recover it. 00:41:22.985 [2024-10-07 14:51:46.475386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.985 [2024-10-07 14:51:46.475400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.986 qpair failed and we were unable to recover it. 00:41:22.986 [2024-10-07 14:51:46.475718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.986 [2024-10-07 14:51:46.475732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.986 qpair failed and we were unable to recover it. 00:41:22.986 [2024-10-07 14:51:46.476014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.986 [2024-10-07 14:51:46.476028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.986 qpair failed and we were unable to recover it. 
00:41:22.986 [2024-10-07 14:51:46.476361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.986 [2024-10-07 14:51:46.476374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.986 qpair failed and we were unable to recover it. 00:41:22.986 [2024-10-07 14:51:46.476652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.986 [2024-10-07 14:51:46.476665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.986 qpair failed and we were unable to recover it. 00:41:22.986 [2024-10-07 14:51:46.476998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.986 [2024-10-07 14:51:46.477015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.986 qpair failed and we were unable to recover it. 00:41:22.986 [2024-10-07 14:51:46.477331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.986 [2024-10-07 14:51:46.477344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.986 qpair failed and we were unable to recover it. 00:41:22.986 [2024-10-07 14:51:46.477668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.986 [2024-10-07 14:51:46.477681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.986 qpair failed and we were unable to recover it. 
00:41:22.986 [2024-10-07 14:51:46.477971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.986 [2024-10-07 14:51:46.477986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.986 qpair failed and we were unable to recover it. 00:41:22.986 [2024-10-07 14:51:46.478299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.986 [2024-10-07 14:51:46.478314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.986 qpair failed and we were unable to recover it. 00:41:22.986 [2024-10-07 14:51:46.478664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.986 [2024-10-07 14:51:46.478678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.986 qpair failed and we were unable to recover it. 00:41:22.986 [2024-10-07 14:51:46.479052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.986 [2024-10-07 14:51:46.479067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.986 qpair failed and we were unable to recover it. 00:41:22.986 [2024-10-07 14:51:46.479390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.986 [2024-10-07 14:51:46.479404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.986 qpair failed and we were unable to recover it. 
00:41:22.986 [2024-10-07 14:51:46.479723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.986 [2024-10-07 14:51:46.479743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.986 qpair failed and we were unable to recover it. 00:41:22.986 [2024-10-07 14:51:46.480069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.986 [2024-10-07 14:51:46.480084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.986 qpair failed and we were unable to recover it. 00:41:22.986 [2024-10-07 14:51:46.480398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.986 [2024-10-07 14:51:46.480412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.986 qpair failed and we were unable to recover it. 00:41:22.986 [2024-10-07 14:51:46.480779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.986 [2024-10-07 14:51:46.480792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.986 qpair failed and we were unable to recover it. 00:41:22.986 [2024-10-07 14:51:46.481166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.986 [2024-10-07 14:51:46.481180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.986 qpair failed and we were unable to recover it. 
00:41:22.989 [2024-10-07 14:51:46.515873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.989 [2024-10-07 14:51:46.515886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.989 qpair failed and we were unable to recover it. 00:41:22.989 [2024-10-07 14:51:46.516197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.989 [2024-10-07 14:51:46.516211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.989 qpair failed and we were unable to recover it. 00:41:22.989 [2024-10-07 14:51:46.516521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.989 [2024-10-07 14:51:46.516534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.989 qpair failed and we were unable to recover it. 00:41:22.989 [2024-10-07 14:51:46.516736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.989 [2024-10-07 14:51:46.516751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.989 qpair failed and we were unable to recover it. 00:41:22.989 [2024-10-07 14:51:46.517047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.989 [2024-10-07 14:51:46.517061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.989 qpair failed and we were unable to recover it. 
00:41:22.989 [2024-10-07 14:51:46.517373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.989 [2024-10-07 14:51:46.517386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.989 qpair failed and we were unable to recover it. 00:41:22.989 [2024-10-07 14:51:46.517719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.989 [2024-10-07 14:51:46.517732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.989 qpair failed and we were unable to recover it. 00:41:22.989 [2024-10-07 14:51:46.518051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.989 [2024-10-07 14:51:46.518065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.989 qpair failed and we were unable to recover it. 00:41:22.989 [2024-10-07 14:51:46.518377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.989 [2024-10-07 14:51:46.518390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.989 qpair failed and we were unable to recover it. 00:41:22.989 [2024-10-07 14:51:46.518725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.989 [2024-10-07 14:51:46.518737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.989 qpair failed and we were unable to recover it. 
00:41:22.989 [2024-10-07 14:51:46.519054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.989 [2024-10-07 14:51:46.519068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.989 qpair failed and we were unable to recover it. 00:41:22.989 [2024-10-07 14:51:46.519415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.989 [2024-10-07 14:51:46.519428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.989 qpair failed and we were unable to recover it. 00:41:22.989 [2024-10-07 14:51:46.519764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.989 [2024-10-07 14:51:46.519777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.989 qpair failed and we were unable to recover it. 00:41:22.989 [2024-10-07 14:51:46.520128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.989 [2024-10-07 14:51:46.520142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.989 qpair failed and we were unable to recover it. 00:41:22.989 [2024-10-07 14:51:46.520456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.989 [2024-10-07 14:51:46.520472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.989 qpair failed and we were unable to recover it. 
00:41:22.989 [2024-10-07 14:51:46.520820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.989 [2024-10-07 14:51:46.520833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.989 qpair failed and we were unable to recover it. 00:41:22.989 [2024-10-07 14:51:46.521145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.989 [2024-10-07 14:51:46.521159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.989 qpair failed and we were unable to recover it. 00:41:22.989 [2024-10-07 14:51:46.521473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.989 [2024-10-07 14:51:46.521486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.989 qpair failed and we were unable to recover it. 00:41:22.989 [2024-10-07 14:51:46.521817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.989 [2024-10-07 14:51:46.521831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.989 qpair failed and we were unable to recover it. 00:41:22.989 [2024-10-07 14:51:46.522158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.989 [2024-10-07 14:51:46.522172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.989 qpair failed and we were unable to recover it. 
00:41:22.989 [2024-10-07 14:51:46.522529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.989 [2024-10-07 14:51:46.522542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.989 qpair failed and we were unable to recover it. 00:41:22.989 [2024-10-07 14:51:46.522932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.989 [2024-10-07 14:51:46.522946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.989 qpair failed and we were unable to recover it. 00:41:22.989 [2024-10-07 14:51:46.523266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.989 [2024-10-07 14:51:46.523279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.989 qpair failed and we were unable to recover it. 00:41:22.989 [2024-10-07 14:51:46.523583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.989 [2024-10-07 14:51:46.523596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.989 qpair failed and we were unable to recover it. 00:41:22.989 [2024-10-07 14:51:46.523793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.523806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 
00:41:22.990 [2024-10-07 14:51:46.523987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.524005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.524341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.524354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.524687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.524700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.525018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.525032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.525380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.525393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 
00:41:22.990 [2024-10-07 14:51:46.525703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.525716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.526026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.526040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.526365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.526378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.526722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.526736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.526956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.526970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 
00:41:22.990 [2024-10-07 14:51:46.527370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.527384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.527577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.527592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.527953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.527966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.528148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.528162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.528484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.528498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 
00:41:22.990 [2024-10-07 14:51:46.528808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.528821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.529133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.529146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.529485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.529499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.529818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.529831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.530148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.530161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 
00:41:22.990 [2024-10-07 14:51:46.530488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.530501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.530784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.530798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.531095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.531109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.531396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.531410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.531614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.531627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 
00:41:22.990 [2024-10-07 14:51:46.531958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.531971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.532294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.532307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.532633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.532647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.532974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.532987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.533326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.533343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 
00:41:22.990 [2024-10-07 14:51:46.533558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.533571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.533883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.533896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.534260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.534274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.534603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.990 [2024-10-07 14:51:46.534617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.990 qpair failed and we were unable to recover it. 00:41:22.990 [2024-10-07 14:51:46.534921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.991 [2024-10-07 14:51:46.534935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.991 qpair failed and we were unable to recover it. 
00:41:22.991 [2024-10-07 14:51:46.535246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.991 [2024-10-07 14:51:46.535261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.991 qpair failed and we were unable to recover it. 00:41:22.991 [2024-10-07 14:51:46.535625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.991 [2024-10-07 14:51:46.535638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.991 qpair failed and we were unable to recover it. 00:41:22.991 [2024-10-07 14:51:46.535845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.991 [2024-10-07 14:51:46.535858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.991 qpair failed and we were unable to recover it. 00:41:22.991 [2024-10-07 14:51:46.536208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.991 [2024-10-07 14:51:46.536223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.991 qpair failed and we were unable to recover it. 00:41:22.991 [2024-10-07 14:51:46.536506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.991 [2024-10-07 14:51:46.536519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.991 qpair failed and we were unable to recover it. 
00:41:22.991 [2024-10-07 14:51:46.536891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.991 [2024-10-07 14:51:46.536904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.991 qpair failed and we were unable to recover it. 00:41:22.991 [2024-10-07 14:51:46.537099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.991 [2024-10-07 14:51:46.537114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.991 qpair failed and we were unable to recover it. 00:41:22.991 [2024-10-07 14:51:46.537392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.991 [2024-10-07 14:51:46.537406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.991 qpair failed and we were unable to recover it. 00:41:22.991 [2024-10-07 14:51:46.537723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.991 [2024-10-07 14:51:46.537736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.991 qpair failed and we were unable to recover it. 00:41:22.991 [2024-10-07 14:51:46.538034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.991 [2024-10-07 14:51:46.538048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.991 qpair failed and we were unable to recover it. 
00:41:22.991 [2024-10-07 14:51:46.538353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.991 [2024-10-07 14:51:46.538367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.991 qpair failed and we were unable to recover it. 00:41:22.991 [2024-10-07 14:51:46.538697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.991 [2024-10-07 14:51:46.538711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.991 qpair failed and we were unable to recover it. 00:41:22.991 [2024-10-07 14:51:46.539049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.991 [2024-10-07 14:51:46.539062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.991 qpair failed and we were unable to recover it. 00:41:22.991 [2024-10-07 14:51:46.539284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.991 [2024-10-07 14:51:46.539297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.991 qpair failed and we were unable to recover it. 00:41:22.991 [2024-10-07 14:51:46.539682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.991 [2024-10-07 14:51:46.539695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.991 qpair failed and we were unable to recover it. 
00:41:22.991 [2024-10-07 14:51:46.539983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.991 [2024-10-07 14:51:46.539996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.991 qpair failed and we were unable to recover it. 
00:41:22.994 [2024-10-07 14:51:46.577784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.994 [2024-10-07 14:51:46.577797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.994 qpair failed and we were unable to recover it. 00:41:22.994 [2024-10-07 14:51:46.578124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.994 [2024-10-07 14:51:46.578138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.994 qpair failed and we were unable to recover it. 00:41:22.994 [2024-10-07 14:51:46.578433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.994 [2024-10-07 14:51:46.578446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.994 qpair failed and we were unable to recover it. 00:41:22.994 [2024-10-07 14:51:46.578758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.994 [2024-10-07 14:51:46.578771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.994 qpair failed and we were unable to recover it. 00:41:22.994 [2024-10-07 14:51:46.579147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.994 [2024-10-07 14:51:46.579160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.994 qpair failed and we were unable to recover it. 
00:41:22.994 [2024-10-07 14:51:46.579423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.994 [2024-10-07 14:51:46.579436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.994 qpair failed and we were unable to recover it. 00:41:22.994 [2024-10-07 14:51:46.579775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.994 [2024-10-07 14:51:46.579787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.994 qpair failed and we were unable to recover it. 00:41:22.994 [2024-10-07 14:51:46.580006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.994 [2024-10-07 14:51:46.580019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.994 qpair failed and we were unable to recover it. 00:41:22.994 [2024-10-07 14:51:46.580321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.994 [2024-10-07 14:51:46.580334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.994 qpair failed and we were unable to recover it. 00:41:22.994 [2024-10-07 14:51:46.580658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.994 [2024-10-07 14:51:46.580672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.994 qpair failed and we were unable to recover it. 
00:41:22.994 [2024-10-07 14:51:46.580993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.994 [2024-10-07 14:51:46.581010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.994 qpair failed and we were unable to recover it. 00:41:22.994 [2024-10-07 14:51:46.581341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.994 [2024-10-07 14:51:46.581356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.994 qpair failed and we were unable to recover it. 00:41:22.994 [2024-10-07 14:51:46.581682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.994 [2024-10-07 14:51:46.581694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.994 qpair failed and we were unable to recover it. 00:41:22.994 [2024-10-07 14:51:46.581998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.994 [2024-10-07 14:51:46.582016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.994 qpair failed and we were unable to recover it. 00:41:22.994 [2024-10-07 14:51:46.582332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.994 [2024-10-07 14:51:46.582345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.994 qpair failed and we were unable to recover it. 
00:41:22.994 [2024-10-07 14:51:46.582663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.994 [2024-10-07 14:51:46.582676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.994 qpair failed and we were unable to recover it. 00:41:22.994 [2024-10-07 14:51:46.582983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.994 [2024-10-07 14:51:46.582996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.994 qpair failed and we were unable to recover it. 00:41:22.994 [2024-10-07 14:51:46.583263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.994 [2024-10-07 14:51:46.583276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.994 qpair failed and we were unable to recover it. 00:41:22.994 [2024-10-07 14:51:46.583588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.994 [2024-10-07 14:51:46.583601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.994 qpair failed and we were unable to recover it. 00:41:22.994 [2024-10-07 14:51:46.583925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.994 [2024-10-07 14:51:46.583939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.994 qpair failed and we were unable to recover it. 
00:41:22.994 [2024-10-07 14:51:46.584249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.994 [2024-10-07 14:51:46.584263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.994 qpair failed and we were unable to recover it. 00:41:22.994 [2024-10-07 14:51:46.584582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.994 [2024-10-07 14:51:46.584596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.994 qpair failed and we were unable to recover it. 00:41:22.994 [2024-10-07 14:51:46.584912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.994 [2024-10-07 14:51:46.584925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.994 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.585222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.585237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.585548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.585563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 
00:41:22.995 [2024-10-07 14:51:46.585905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.585918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.586230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.586244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.586559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.586572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.586870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.586884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.587170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.587185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 
00:41:22.995 [2024-10-07 14:51:46.587408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.587422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.587759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.587773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.587942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.587956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.588296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.588311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.588634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.588648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 
00:41:22.995 [2024-10-07 14:51:46.588828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.588843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.589178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.589192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.589507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.589521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.589849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.589863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.590195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.590209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 
00:41:22.995 [2024-10-07 14:51:46.590548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.590561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.590892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.590905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.591238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.591252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.591650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.591663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.591995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.592018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 
00:41:22.995 [2024-10-07 14:51:46.592356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.592369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.592681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.592695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.592985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.592998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.593282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.593295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.593527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.593541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 
00:41:22.995 [2024-10-07 14:51:46.593862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.593874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.594216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.594231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.594546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.594559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.594889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.594902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.595236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.595250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 
00:41:22.995 [2024-10-07 14:51:46.595565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.595579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.595920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.595933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.596257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.596271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.596652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.596666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 00:41:22.995 [2024-10-07 14:51:46.596972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.995 [2024-10-07 14:51:46.596986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.995 qpair failed and we were unable to recover it. 
00:41:22.995 [2024-10-07 14:51:46.597356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.597369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.597744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.597758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.598059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.598073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.598406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.598420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.598736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.598752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 
00:41:22.996 [2024-10-07 14:51:46.599038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.599052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.599408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.599421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.599702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.599716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.600048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.600062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.600286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.600299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 
00:41:22.996 [2024-10-07 14:51:46.600624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.600637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.600928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.600941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.601248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.601261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.601577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.601590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.601903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.601916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 
00:41:22.996 [2024-10-07 14:51:46.602232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.602246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.602502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.602515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.602805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.602819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.603135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.603148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.603457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.603470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 
00:41:22.996 [2024-10-07 14:51:46.603807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.603820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.604099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.604113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.604431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.604444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.604772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.604785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.605101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.605115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 
00:41:22.996 [2024-10-07 14:51:46.605433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.605446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.605782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.605796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.606126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.606141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.606458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.606471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.606847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.606860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 
00:41:22.996 [2024-10-07 14:51:46.607191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.607205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.607501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.607515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.607700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.607713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.607920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.607934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.608261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.608274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 
00:41:22.996 [2024-10-07 14:51:46.608610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.608624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.608937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.608950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.609271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.609285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.609577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.609590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 00:41:22.996 [2024-10-07 14:51:46.609901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.996 [2024-10-07 14:51:46.609914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.996 qpair failed and we were unable to recover it. 
00:41:22.996 [2024-10-07 14:51:46.610202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.610216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.610564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.610577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.610895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.610908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.611232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.611246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.611532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.611548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 
00:41:22.997 [2024-10-07 14:51:46.611862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.611875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.612211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.612225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.612450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.612463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.612775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.612790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.613121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.613134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 
00:41:22.997 [2024-10-07 14:51:46.613421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.613434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.613768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.613781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.614057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.614071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.614388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.614401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.614718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.614732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 
00:41:22.997 [2024-10-07 14:51:46.615061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.615074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.615458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.615471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.615801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.615817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.616138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.616153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.616458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.616471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 
00:41:22.997 [2024-10-07 14:51:46.616788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.616801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.617126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.617140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.617472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.617485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.617812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.617825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.618144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.618158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 
00:41:22.997 [2024-10-07 14:51:46.618459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.618472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.618778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.618791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.619118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.619132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.619419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.619433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.619768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.619782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 
00:41:22.997 [2024-10-07 14:51:46.620045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.997 [2024-10-07 14:51:46.620059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.997 qpair failed and we were unable to recover it. 00:41:22.997 [2024-10-07 14:51:46.620384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.998 [2024-10-07 14:51:46.620398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.998 qpair failed and we were unable to recover it. 00:41:22.998 [2024-10-07 14:51:46.620757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.998 [2024-10-07 14:51:46.620770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.998 qpair failed and we were unable to recover it. 00:41:22.998 [2024-10-07 14:51:46.620915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.998 [2024-10-07 14:51:46.620929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.998 qpair failed and we were unable to recover it. 00:41:22.998 [2024-10-07 14:51:46.621289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.998 [2024-10-07 14:51:46.621302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.998 qpair failed and we were unable to recover it. 
00:41:22.998 [2024-10-07 14:51:46.621512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.998 [2024-10-07 14:51:46.621525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.998 qpair failed and we were unable to recover it. 00:41:22.998 [2024-10-07 14:51:46.621870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.998 [2024-10-07 14:51:46.621884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.998 qpair failed and we were unable to recover it. 00:41:22.998 [2024-10-07 14:51:46.622211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.998 [2024-10-07 14:51:46.622225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.998 qpair failed and we were unable to recover it. 00:41:22.998 [2024-10-07 14:51:46.622540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.998 [2024-10-07 14:51:46.622553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.998 qpair failed and we were unable to recover it. 00:41:22.998 [2024-10-07 14:51:46.622732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.998 [2024-10-07 14:51:46.622746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.998 qpair failed and we were unable to recover it. 
00:41:22.998 [2024-10-07 14:51:46.623057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.998 [2024-10-07 14:51:46.623070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.998 qpair failed and we were unable to recover it. 00:41:22.998 [2024-10-07 14:51:46.623389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.998 [2024-10-07 14:51:46.623402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.998 qpair failed and we were unable to recover it. 00:41:22.998 [2024-10-07 14:51:46.623730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.998 [2024-10-07 14:51:46.623744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.998 qpair failed and we were unable to recover it. 00:41:22.998 [2024-10-07 14:51:46.624062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.998 [2024-10-07 14:51:46.624075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.998 qpair failed and we were unable to recover it. 00:41:22.998 [2024-10-07 14:51:46.624393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.998 [2024-10-07 14:51:46.624409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.998 qpair failed and we were unable to recover it. 
00:41:22.998 [2024-10-07 14:51:46.624712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.998 [2024-10-07 14:51:46.624725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.998 qpair failed and we were unable to recover it. 00:41:22.998 [2024-10-07 14:51:46.624995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.998 [2024-10-07 14:51:46.625012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.998 qpair failed and we were unable to recover it. 00:41:22.998 [2024-10-07 14:51:46.625339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.998 [2024-10-07 14:51:46.625353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.998 qpair failed and we were unable to recover it. 00:41:22.998 [2024-10-07 14:51:46.625664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.998 [2024-10-07 14:51:46.625678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.998 qpair failed and we were unable to recover it. 00:41:22.998 [2024-10-07 14:51:46.626016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.998 [2024-10-07 14:51:46.626030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.998 qpair failed and we were unable to recover it. 
00:41:22.998 [2024-10-07 14:51:46.626406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.998 [2024-10-07 14:51:46.626419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.998 qpair failed and we were unable to recover it. 00:41:22.998 [2024-10-07 14:51:46.626730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.998 [2024-10-07 14:51:46.626744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.998 qpair failed and we were unable to recover it. 00:41:22.998 [2024-10-07 14:51:46.627067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.627080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 00:41:22.999 [2024-10-07 14:51:46.627398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.627411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 00:41:22.999 [2024-10-07 14:51:46.627629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.627642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 
00:41:22.999 [2024-10-07 14:51:46.627972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.627985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 00:41:22.999 [2024-10-07 14:51:46.628308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.628322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 00:41:22.999 [2024-10-07 14:51:46.628619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.628632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 00:41:22.999 [2024-10-07 14:51:46.628963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.628976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 00:41:22.999 [2024-10-07 14:51:46.629297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.629311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 
00:41:22.999 [2024-10-07 14:51:46.629622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.629635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 00:41:22.999 [2024-10-07 14:51:46.629968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.629982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 00:41:22.999 [2024-10-07 14:51:46.630281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.630295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 00:41:22.999 [2024-10-07 14:51:46.630619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.630632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 00:41:22.999 [2024-10-07 14:51:46.630969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.630983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 
00:41:22.999 [2024-10-07 14:51:46.631297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.631311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 00:41:22.999 [2024-10-07 14:51:46.631621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.631635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 00:41:22.999 [2024-10-07 14:51:46.631851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.631864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 00:41:22.999 [2024-10-07 14:51:46.632200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.632214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 00:41:22.999 [2024-10-07 14:51:46.632446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.632459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 
00:41:22.999 [2024-10-07 14:51:46.632773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.632786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 00:41:22.999 [2024-10-07 14:51:46.633181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.633195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 00:41:22.999 [2024-10-07 14:51:46.633518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.633531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 00:41:22.999 [2024-10-07 14:51:46.633830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.633843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 00:41:22.999 [2024-10-07 14:51:46.634068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.634082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 
00:41:22.999 [2024-10-07 14:51:46.634400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.634413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 00:41:22.999 [2024-10-07 14:51:46.634757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.634770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 00:41:22.999 [2024-10-07 14:51:46.635082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.635095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 00:41:22.999 [2024-10-07 14:51:46.635487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.635500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 00:41:22.999 [2024-10-07 14:51:46.635805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:22.999 [2024-10-07 14:51:46.635818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:22.999 qpair failed and we were unable to recover it. 
00:41:22.999 [2024-10-07 14:51:46.636034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.999 [2024-10-07 14:51:46.636048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.999 qpair failed and we were unable to recover it.
00:41:22.999 [2024-10-07 14:51:46.636369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.999 [2024-10-07 14:51:46.636382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.999 qpair failed and we were unable to recover it.
00:41:22.999 [2024-10-07 14:51:46.636665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.999 [2024-10-07 14:51:46.636678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.999 qpair failed and we were unable to recover it.
00:41:22.999 [2024-10-07 14:51:46.636890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.999 [2024-10-07 14:51:46.636904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.999 qpair failed and we were unable to recover it.
00:41:22.999 [2024-10-07 14:51:46.637224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.999 [2024-10-07 14:51:46.637240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.999 qpair failed and we were unable to recover it.
00:41:22.999 [2024-10-07 14:51:46.637565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.999 [2024-10-07 14:51:46.637579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.999 qpair failed and we were unable to recover it.
00:41:22.999 [2024-10-07 14:51:46.637907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.999 [2024-10-07 14:51:46.637920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.999 qpair failed and we were unable to recover it.
00:41:22.999 [2024-10-07 14:51:46.638137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.999 [2024-10-07 14:51:46.638151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:22.999 qpair failed and we were unable to recover it.
00:41:22.999 [2024-10-07 14:51:46.638448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:22.999 [2024-10-07 14:51:46.638462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.638792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.638806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.639112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.639125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.639420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.639433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.639813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.639826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.640031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.640044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.640347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.640360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.640677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.640691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.641014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.641028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.641211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.641226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.641429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.641443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.641711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.641724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.642026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.642046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.642355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.642368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.642685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.642699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.643037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.643051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.643350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.643363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.643567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.643581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.643905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.643919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.644241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.644254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.644566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.644580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.644918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.644932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.645222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.645236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.645551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.645565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.645862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.645876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.646215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.646228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.646542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.646556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.646903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.646917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.647254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.647268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.647589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.647603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.647965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.647979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.648292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.648307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.648634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.648648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.648984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.648998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.649319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.649333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.649669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.649683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.650031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.650048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.650389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.650403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.000 qpair failed and we were unable to recover it.
00:41:23.000 [2024-10-07 14:51:46.650728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.000 [2024-10-07 14:51:46.650741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.650961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.650975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.651155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.651168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.651408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.651421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.651789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.651802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.652080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.652094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.652411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.652424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.652610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.652623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.652934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.652947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.653270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.653283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.653612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.653626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.653948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.653962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.654284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.654297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.654602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.654616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.654947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.654961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.655275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.655289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.655472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.655487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.655824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.655837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.656121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.656138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.656433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.656446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.656761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.656775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.657084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.657097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.657441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.657455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.657827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.657840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.658040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.658055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.658378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.658393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.658714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.658727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.659044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.659058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.659345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.659359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.659682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.659695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.659978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.660004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.660355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.660369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.660680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.660693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.661015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.661029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.661337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.661351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.661696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.661710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.661991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.662009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.662214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.662227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.662551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.662564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.662771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.001 [2024-10-07 14:51:46.662785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.001 qpair failed and we were unable to recover it.
00:41:23.001 [2024-10-07 14:51:46.663044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.002 [2024-10-07 14:51:46.663058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.002 qpair failed and we were unable to recover it.
00:41:23.002 [2024-10-07 14:51:46.663390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.002 [2024-10-07 14:51:46.663404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.002 qpair failed and we were unable to recover it.
00:41:23.002 [2024-10-07 14:51:46.663710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.002 [2024-10-07 14:51:46.663723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.002 qpair failed and we were unable to recover it.
00:41:23.002 [2024-10-07 14:51:46.664062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.002 [2024-10-07 14:51:46.664081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.002 qpair failed and we were unable to recover it.
00:41:23.002 [2024-10-07 14:51:46.664405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.002 [2024-10-07 14:51:46.664418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.002 qpair failed and we were unable to recover it.
00:41:23.002 [2024-10-07 14:51:46.664736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.002 [2024-10-07 14:51:46.664757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.002 qpair failed and we were unable to recover it.
00:41:23.002 [2024-10-07 14:51:46.665059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.002 [2024-10-07 14:51:46.665073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.002 qpair failed and we were unable to recover it.
00:41:23.002 [2024-10-07 14:51:46.665392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.002 [2024-10-07 14:51:46.665413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.002 qpair failed and we were unable to recover it.
00:41:23.002 [2024-10-07 14:51:46.665712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.002 [2024-10-07 14:51:46.665725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.002 qpair failed and we were unable to recover it.
00:41:23.002 [2024-10-07 14:51:46.666104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.002 [2024-10-07 14:51:46.666119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.002 qpair failed and we were unable to recover it. 00:41:23.002 [2024-10-07 14:51:46.666402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.002 [2024-10-07 14:51:46.666415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.002 qpair failed and we were unable to recover it. 00:41:23.002 [2024-10-07 14:51:46.666696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.002 [2024-10-07 14:51:46.666709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.002 qpair failed and we were unable to recover it. 00:41:23.002 [2024-10-07 14:51:46.667028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.002 [2024-10-07 14:51:46.667043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.002 qpair failed and we were unable to recover it. 00:41:23.002 [2024-10-07 14:51:46.667377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.002 [2024-10-07 14:51:46.667391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.002 qpair failed and we were unable to recover it. 
00:41:23.002 [2024-10-07 14:51:46.667702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.002 [2024-10-07 14:51:46.667716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.002 qpair failed and we were unable to recover it. 00:41:23.002 [2024-10-07 14:51:46.667926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.002 [2024-10-07 14:51:46.667940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.002 qpair failed and we were unable to recover it. 00:41:23.002 [2024-10-07 14:51:46.668138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.002 [2024-10-07 14:51:46.668152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.002 qpair failed and we were unable to recover it. 00:41:23.002 [2024-10-07 14:51:46.668532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.002 [2024-10-07 14:51:46.668545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.002 qpair failed and we were unable to recover it. 00:41:23.002 [2024-10-07 14:51:46.668891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.002 [2024-10-07 14:51:46.668905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.002 qpair failed and we were unable to recover it. 
00:41:23.002 [2024-10-07 14:51:46.669258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.002 [2024-10-07 14:51:46.669272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.002 qpair failed and we were unable to recover it. 00:41:23.002 [2024-10-07 14:51:46.669555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.002 [2024-10-07 14:51:46.669569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.002 qpair failed and we were unable to recover it. 00:41:23.002 [2024-10-07 14:51:46.669883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.002 [2024-10-07 14:51:46.669896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.002 qpair failed and we were unable to recover it. 00:41:23.002 [2024-10-07 14:51:46.670227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.002 [2024-10-07 14:51:46.670247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.002 qpair failed and we were unable to recover it. 00:41:23.002 [2024-10-07 14:51:46.670564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.002 [2024-10-07 14:51:46.670577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.002 qpair failed and we were unable to recover it. 
00:41:23.002 [2024-10-07 14:51:46.670908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.002 [2024-10-07 14:51:46.670930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.002 qpair failed and we were unable to recover it. 00:41:23.002 [2024-10-07 14:51:46.671288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.002 [2024-10-07 14:51:46.671305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.002 qpair failed and we were unable to recover it. 00:41:23.002 [2024-10-07 14:51:46.671587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.002 [2024-10-07 14:51:46.671608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.002 qpair failed and we were unable to recover it. 00:41:23.284 [2024-10-07 14:51:46.671946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.284 [2024-10-07 14:51:46.671960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.284 qpair failed and we were unable to recover it. 00:41:23.284 [2024-10-07 14:51:46.672359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.284 [2024-10-07 14:51:46.672375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.284 qpair failed and we were unable to recover it. 
00:41:23.284 [2024-10-07 14:51:46.672698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.284 [2024-10-07 14:51:46.672711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.284 qpair failed and we were unable to recover it. 00:41:23.284 [2024-10-07 14:51:46.673009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.284 [2024-10-07 14:51:46.673023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.284 qpair failed and we were unable to recover it. 00:41:23.284 [2024-10-07 14:51:46.673361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.284 [2024-10-07 14:51:46.673374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.284 qpair failed and we were unable to recover it. 00:41:23.284 [2024-10-07 14:51:46.673690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.284 [2024-10-07 14:51:46.673704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.284 qpair failed and we were unable to recover it. 00:41:23.284 [2024-10-07 14:51:46.674081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.284 [2024-10-07 14:51:46.674094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.284 qpair failed and we were unable to recover it. 
00:41:23.284 [2024-10-07 14:51:46.674370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.284 [2024-10-07 14:51:46.674384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.284 qpair failed and we were unable to recover it. 00:41:23.284 [2024-10-07 14:51:46.674696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.284 [2024-10-07 14:51:46.674709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.284 qpair failed and we were unable to recover it. 00:41:23.284 [2024-10-07 14:51:46.675034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.284 [2024-10-07 14:51:46.675050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.284 qpair failed and we were unable to recover it. 00:41:23.284 [2024-10-07 14:51:46.675337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.284 [2024-10-07 14:51:46.675351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.284 qpair failed and we were unable to recover it. 00:41:23.284 [2024-10-07 14:51:46.675633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.284 [2024-10-07 14:51:46.675646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.284 qpair failed and we were unable to recover it. 
00:41:23.284 [2024-10-07 14:51:46.675988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.284 [2024-10-07 14:51:46.676005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.284 qpair failed and we were unable to recover it. 00:41:23.284 [2024-10-07 14:51:46.676331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.284 [2024-10-07 14:51:46.676344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.284 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.676667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.676687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.676902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.676915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.677093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.677108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 
00:41:23.285 [2024-10-07 14:51:46.677335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.677349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.677693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.677707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.677995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.678012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.678328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.678342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.678624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.678644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 
00:41:23.285 [2024-10-07 14:51:46.678969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.678982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.679309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.679323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.679636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.679649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.680019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.680035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.680345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.680358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 
00:41:23.285 [2024-10-07 14:51:46.680658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.680673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.680993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.681010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.681312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.681326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.681624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.681637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.681966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.681979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 
00:41:23.285 [2024-10-07 14:51:46.682297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.682311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.682500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.682516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.682836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.682850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.683168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.683182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.683472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.683485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 
00:41:23.285 [2024-10-07 14:51:46.683817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.683831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.684147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.684164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.684505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.684519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.684798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.684812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.685028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.685042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 
00:41:23.285 [2024-10-07 14:51:46.685348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.685361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.685677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.685690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.686018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.686032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.686239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.686253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.686472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.686491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 
00:41:23.285 [2024-10-07 14:51:46.686779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.686792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.687070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.687085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.687413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.687426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.687755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.687768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.688080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.688094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 
00:41:23.285 [2024-10-07 14:51:46.688328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.688342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.285 [2024-10-07 14:51:46.688654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.285 [2024-10-07 14:51:46.688668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.285 qpair failed and we were unable to recover it. 00:41:23.286 [2024-10-07 14:51:46.688970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.286 [2024-10-07 14:51:46.688983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.286 qpair failed and we were unable to recover it. 00:41:23.286 [2024-10-07 14:51:46.689332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.286 [2024-10-07 14:51:46.689346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.286 qpair failed and we were unable to recover it. 00:41:23.286 [2024-10-07 14:51:46.689662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.286 [2024-10-07 14:51:46.689711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.286 qpair failed and we were unable to recover it. 
00:41:23.286 [2024-10-07 14:51:46.690025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.286 [2024-10-07 14:51:46.690040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.286 qpair failed and we were unable to recover it. 00:41:23.286 [2024-10-07 14:51:46.690345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.286 [2024-10-07 14:51:46.690358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.286 qpair failed and we were unable to recover it. 00:41:23.286 [2024-10-07 14:51:46.690668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.286 [2024-10-07 14:51:46.690682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.286 qpair failed and we were unable to recover it. 00:41:23.286 [2024-10-07 14:51:46.690977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.286 [2024-10-07 14:51:46.690991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.286 qpair failed and we were unable to recover it. 00:41:23.286 [2024-10-07 14:51:46.691321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.286 [2024-10-07 14:51:46.691336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.286 qpair failed and we were unable to recover it. 
00:41:23.286 [2024-10-07 14:51:46.691662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.286 [2024-10-07 14:51:46.691676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.286 qpair failed and we were unable to recover it. 00:41:23.286 [2024-10-07 14:51:46.692020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.286 [2024-10-07 14:51:46.692035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.286 qpair failed and we were unable to recover it. 00:41:23.286 [2024-10-07 14:51:46.692358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.286 [2024-10-07 14:51:46.692372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.286 qpair failed and we were unable to recover it. 00:41:23.286 [2024-10-07 14:51:46.692684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.286 [2024-10-07 14:51:46.692698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.286 qpair failed and we were unable to recover it. 00:41:23.286 [2024-10-07 14:51:46.693027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.286 [2024-10-07 14:51:46.693040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.286 qpair failed and we were unable to recover it. 
00:41:23.286 [2024-10-07 14:51:46.693346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.286 [2024-10-07 14:51:46.693366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.286 qpair failed and we were unable to recover it. 00:41:23.286 [2024-10-07 14:51:46.693681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.286 [2024-10-07 14:51:46.693694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.286 qpair failed and we were unable to recover it. 00:41:23.286 [2024-10-07 14:51:46.693933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.286 [2024-10-07 14:51:46.693947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.286 qpair failed and we were unable to recover it. 00:41:23.286 [2024-10-07 14:51:46.694249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.286 [2024-10-07 14:51:46.694262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.286 qpair failed and we were unable to recover it. 00:41:23.286 [2024-10-07 14:51:46.694671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.286 [2024-10-07 14:51:46.694685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.286 qpair failed and we were unable to recover it. 
00:41:23.289 [2024-10-07 14:51:46.729590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.289 [2024-10-07 14:51:46.729603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.289 qpair failed and we were unable to recover it. 00:41:23.289 [2024-10-07 14:51:46.729939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.289 [2024-10-07 14:51:46.729952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.289 qpair failed and we were unable to recover it. 00:41:23.289 [2024-10-07 14:51:46.730337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.289 [2024-10-07 14:51:46.730351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.289 qpair failed and we were unable to recover it. 00:41:23.289 [2024-10-07 14:51:46.730673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.289 [2024-10-07 14:51:46.730687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.289 qpair failed and we were unable to recover it. 00:41:23.289 [2024-10-07 14:51:46.731012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.289 [2024-10-07 14:51:46.731026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.289 qpair failed and we were unable to recover it. 
00:41:23.289 [2024-10-07 14:51:46.731344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.289 [2024-10-07 14:51:46.731357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.289 qpair failed and we were unable to recover it. 00:41:23.289 [2024-10-07 14:51:46.731672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.289 [2024-10-07 14:51:46.731686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.289 qpair failed and we were unable to recover it. 00:41:23.289 [2024-10-07 14:51:46.731905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.289 [2024-10-07 14:51:46.731918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.289 qpair failed and we were unable to recover it. 00:41:23.289 [2024-10-07 14:51:46.732298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.289 [2024-10-07 14:51:46.732312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.289 qpair failed and we were unable to recover it. 00:41:23.289 [2024-10-07 14:51:46.732607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.289 [2024-10-07 14:51:46.732620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.289 qpair failed and we were unable to recover it. 
00:41:23.289 [2024-10-07 14:51:46.732906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.289 [2024-10-07 14:51:46.732919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.289 qpair failed and we were unable to recover it. 00:41:23.289 [2024-10-07 14:51:46.733214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.289 [2024-10-07 14:51:46.733228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.289 qpair failed and we were unable to recover it. 00:41:23.289 [2024-10-07 14:51:46.733571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.289 [2024-10-07 14:51:46.733584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.289 qpair failed and we were unable to recover it. 00:41:23.289 [2024-10-07 14:51:46.733766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.289 [2024-10-07 14:51:46.733780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.289 qpair failed and we were unable to recover it. 00:41:23.289 [2024-10-07 14:51:46.734109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.289 [2024-10-07 14:51:46.734122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.289 qpair failed and we were unable to recover it. 
00:41:23.289 [2024-10-07 14:51:46.734404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.289 [2024-10-07 14:51:46.734418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.289 qpair failed and we were unable to recover it. 00:41:23.289 [2024-10-07 14:51:46.734749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.289 [2024-10-07 14:51:46.734765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.289 qpair failed and we were unable to recover it. 00:41:23.289 [2024-10-07 14:51:46.735077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.289 [2024-10-07 14:51:46.735091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.289 qpair failed and we were unable to recover it. 00:41:23.289 [2024-10-07 14:51:46.735415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.289 [2024-10-07 14:51:46.735428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.289 qpair failed and we were unable to recover it. 00:41:23.289 [2024-10-07 14:51:46.735712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.289 [2024-10-07 14:51:46.735726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.289 qpair failed and we were unable to recover it. 
00:41:23.289 [2024-10-07 14:51:46.736036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.289 [2024-10-07 14:51:46.736050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.289 qpair failed and we were unable to recover it. 00:41:23.289 [2024-10-07 14:51:46.736375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.289 [2024-10-07 14:51:46.736389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.289 qpair failed and we were unable to recover it. 00:41:23.289 [2024-10-07 14:51:46.736691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.289 [2024-10-07 14:51:46.736703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.289 qpair failed and we were unable to recover it. 00:41:23.289 [2024-10-07 14:51:46.737016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.737030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.737338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.737351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 
00:41:23.290 [2024-10-07 14:51:46.737684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.737698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.738061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.738075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.738389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.738403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.738733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.738746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.739146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.739160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 
00:41:23.290 [2024-10-07 14:51:46.739489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.739502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.739850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.739863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.740177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.740191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.740413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.740426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.740714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.740727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 
00:41:23.290 [2024-10-07 14:51:46.741113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.741126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.741326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.741339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.741642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.741655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.741989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.742006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.742295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.742308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 
00:41:23.290 [2024-10-07 14:51:46.742477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.742490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.742852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.742865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.743239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.743254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.743620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.743634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.744037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.744051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 
00:41:23.290 [2024-10-07 14:51:46.744348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.744361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.744642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.744655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.744964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.744977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.745190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.745206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.745522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.745535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 
00:41:23.290 [2024-10-07 14:51:46.745851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.745864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.746173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.746186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.746356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.746370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.746705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.746718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.747038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.747052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 
00:41:23.290 [2024-10-07 14:51:46.747375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.747388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.747705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.747721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.748033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.748046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.748365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.748378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.748691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.748704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 
00:41:23.290 [2024-10-07 14:51:46.748994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.749014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.749327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.749340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.749663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.290 [2024-10-07 14:51:46.749676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.290 qpair failed and we were unable to recover it. 00:41:23.290 [2024-10-07 14:51:46.749897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.291 [2024-10-07 14:51:46.749910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.291 qpair failed and we were unable to recover it. 00:41:23.291 [2024-10-07 14:51:46.750246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.291 [2024-10-07 14:51:46.750259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.291 qpair failed and we were unable to recover it. 
00:41:23.291 [2024-10-07 14:51:46.750469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.291 [2024-10-07 14:51:46.750482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.291 qpair failed and we were unable to recover it. 00:41:23.291 [2024-10-07 14:51:46.750789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.291 [2024-10-07 14:51:46.750802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.291 qpair failed and we were unable to recover it. 00:41:23.291 [2024-10-07 14:51:46.751141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.291 [2024-10-07 14:51:46.751154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.291 qpair failed and we were unable to recover it. 00:41:23.291 [2024-10-07 14:51:46.751469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.291 [2024-10-07 14:51:46.751483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.291 qpair failed and we were unable to recover it. 00:41:23.291 [2024-10-07 14:51:46.751816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.291 [2024-10-07 14:51:46.751830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.291 qpair failed and we were unable to recover it. 
00:41:23.291 [2024-10-07 14:51:46.752150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.291 [2024-10-07 14:51:46.752163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.291 qpair failed and we were unable to recover it. 00:41:23.291 [2024-10-07 14:51:46.752444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.291 [2024-10-07 14:51:46.752457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.291 qpair failed and we were unable to recover it. 00:41:23.291 [2024-10-07 14:51:46.752764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.291 [2024-10-07 14:51:46.752778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.291 qpair failed and we were unable to recover it. 00:41:23.291 [2024-10-07 14:51:46.753109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.291 [2024-10-07 14:51:46.753123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.291 qpair failed and we were unable to recover it. 00:41:23.291 [2024-10-07 14:51:46.753447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.291 [2024-10-07 14:51:46.753460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.291 qpair failed and we were unable to recover it. 
00:41:23.291 [2024-10-07 14:51:46.753781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.291 [2024-10-07 14:51:46.753794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.291 qpair failed and we were unable to recover it. 
00:41:23.294 [2024-10-07 14:51:46.790729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.790744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 00:41:23.294 [2024-10-07 14:51:46.791069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.791084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 00:41:23.294 [2024-10-07 14:51:46.791417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.791431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 00:41:23.294 [2024-10-07 14:51:46.791745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.791759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 00:41:23.294 [2024-10-07 14:51:46.792077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.792091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 
00:41:23.294 [2024-10-07 14:51:46.792458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.792472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 00:41:23.294 [2024-10-07 14:51:46.792770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.792784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 00:41:23.294 [2024-10-07 14:51:46.793045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.793060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 00:41:23.294 [2024-10-07 14:51:46.793352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.793368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 00:41:23.294 [2024-10-07 14:51:46.793558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.793573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 
00:41:23.294 [2024-10-07 14:51:46.793851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.793865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 00:41:23.294 [2024-10-07 14:51:46.794152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.794167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 00:41:23.294 [2024-10-07 14:51:46.794496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.794511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 00:41:23.294 [2024-10-07 14:51:46.794836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.794850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 00:41:23.294 [2024-10-07 14:51:46.795192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.795208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 
00:41:23.294 [2024-10-07 14:51:46.795534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.795548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 00:41:23.294 [2024-10-07 14:51:46.795871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.795886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 00:41:23.294 [2024-10-07 14:51:46.796213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.796228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 00:41:23.294 [2024-10-07 14:51:46.796536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.796551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 00:41:23.294 [2024-10-07 14:51:46.796841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.796855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 
00:41:23.294 [2024-10-07 14:51:46.797180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.797195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 00:41:23.294 [2024-10-07 14:51:46.797529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.797543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 00:41:23.294 [2024-10-07 14:51:46.797865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.797880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 00:41:23.294 [2024-10-07 14:51:46.798242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.798258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 00:41:23.294 [2024-10-07 14:51:46.798587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.798602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 
00:41:23.294 [2024-10-07 14:51:46.798911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.798926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 00:41:23.294 [2024-10-07 14:51:46.799260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.799278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 00:41:23.294 [2024-10-07 14:51:46.799598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.294 [2024-10-07 14:51:46.799613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.294 qpair failed and we were unable to recover it. 00:41:23.294 [2024-10-07 14:51:46.799919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.799934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.800106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.800122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 
00:41:23.295 [2024-10-07 14:51:46.800407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.800421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.800745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.800759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.801071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.801087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.801433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.801447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.801766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.801781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 
00:41:23.295 [2024-10-07 14:51:46.801973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.801987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.802317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.802333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.802660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.802675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.803013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.803028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.803567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.803584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 
00:41:23.295 [2024-10-07 14:51:46.803895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.803912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.804240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.804256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.804579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.804595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.804920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.804934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.805301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.805316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 
00:41:23.295 [2024-10-07 14:51:46.805632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.805647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.805972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.805986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.806360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.806375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.806694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.806708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.807037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.807053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 
00:41:23.295 [2024-10-07 14:51:46.807361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.807377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.807691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.807705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.808012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.808027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.808317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.808332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.808664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.808678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 
00:41:23.295 [2024-10-07 14:51:46.809015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.809032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.809358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.809372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.809694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.809709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.810033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.810048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.810344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.810360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 
00:41:23.295 [2024-10-07 14:51:46.810670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.810684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.810985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.811005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.811357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.811374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.811665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.811680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.295 [2024-10-07 14:51:46.812016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.812032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 
00:41:23.295 [2024-10-07 14:51:46.812218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.295 [2024-10-07 14:51:46.812233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.295 qpair failed and we were unable to recover it. 00:41:23.296 [2024-10-07 14:51:46.812556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.296 [2024-10-07 14:51:46.812573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.296 qpair failed and we were unable to recover it. 00:41:23.296 [2024-10-07 14:51:46.812884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.296 [2024-10-07 14:51:46.812900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.296 qpair failed and we were unable to recover it. 00:41:23.296 [2024-10-07 14:51:46.813202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.296 [2024-10-07 14:51:46.813218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.296 qpair failed and we were unable to recover it. 00:41:23.296 [2024-10-07 14:51:46.813540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.296 [2024-10-07 14:51:46.813556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.296 qpair failed and we were unable to recover it. 
00:41:23.296 [2024-10-07 14:51:46.813849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.296 [2024-10-07 14:51:46.813863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.296 qpair failed and we were unable to recover it. 00:41:23.296 [2024-10-07 14:51:46.814172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.296 [2024-10-07 14:51:46.814188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.296 qpair failed and we were unable to recover it. 00:41:23.296 [2024-10-07 14:51:46.814504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.296 [2024-10-07 14:51:46.814519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.296 qpair failed and we were unable to recover it. 00:41:23.296 [2024-10-07 14:51:46.814846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.296 [2024-10-07 14:51:46.814861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.296 qpair failed and we were unable to recover it. 00:41:23.296 [2024-10-07 14:51:46.815183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.296 [2024-10-07 14:51:46.815198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.296 qpair failed and we were unable to recover it. 
00:41:23.296 [2024-10-07 14:51:46.815529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.296 [2024-10-07 14:51:46.815543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.296 qpair failed and we were unable to recover it. 00:41:23.296 [2024-10-07 14:51:46.815762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.296 [2024-10-07 14:51:46.815776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.296 qpair failed and we were unable to recover it. 00:41:23.296 [2024-10-07 14:51:46.816120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.296 [2024-10-07 14:51:46.816135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.296 qpair failed and we were unable to recover it. 00:41:23.296 [2024-10-07 14:51:46.816464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.296 [2024-10-07 14:51:46.816479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.296 qpair failed and we were unable to recover it. 00:41:23.296 [2024-10-07 14:51:46.816803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.296 [2024-10-07 14:51:46.816817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.296 qpair failed and we were unable to recover it. 
[... the same "posix_sock_create: *ERROR*: connect() failed, errno = 111" / "nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." sequence repeats with advancing timestamps through 2024-10-07 14:51:46.852968 (~110 further repetitions elided) ...]
00:41:23.299 [2024-10-07 14:51:46.853264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.853278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 00:41:23.299 [2024-10-07 14:51:46.853590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.853603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 00:41:23.299 [2024-10-07 14:51:46.853934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.853946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 00:41:23.299 [2024-10-07 14:51:46.854249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.854262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 00:41:23.299 [2024-10-07 14:51:46.854595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.854607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 
00:41:23.299 [2024-10-07 14:51:46.854921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.854934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 00:41:23.299 [2024-10-07 14:51:46.855263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.855276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 00:41:23.299 [2024-10-07 14:51:46.855610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.855623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 00:41:23.299 [2024-10-07 14:51:46.855952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.855964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 00:41:23.299 [2024-10-07 14:51:46.856312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.856327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 
00:41:23.299 [2024-10-07 14:51:46.856655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.856670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 00:41:23.299 [2024-10-07 14:51:46.857035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.857051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 00:41:23.299 [2024-10-07 14:51:46.857371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.857385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 00:41:23.299 [2024-10-07 14:51:46.857682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.857698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 00:41:23.299 [2024-10-07 14:51:46.858023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.858039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 
00:41:23.299 [2024-10-07 14:51:46.858256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.858271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 00:41:23.299 [2024-10-07 14:51:46.858486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.858501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 00:41:23.299 [2024-10-07 14:51:46.858775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.858790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 00:41:23.299 [2024-10-07 14:51:46.859102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.859118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 00:41:23.299 [2024-10-07 14:51:46.859430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.859445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 
00:41:23.299 [2024-10-07 14:51:46.859773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.859788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 00:41:23.299 [2024-10-07 14:51:46.860117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.860133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 00:41:23.299 [2024-10-07 14:51:46.860477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.860493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 00:41:23.299 [2024-10-07 14:51:46.860861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.860881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 00:41:23.299 [2024-10-07 14:51:46.861242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.861258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.299 qpair failed and we were unable to recover it. 
00:41:23.299 [2024-10-07 14:51:46.861578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.299 [2024-10-07 14:51:46.861593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.861920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.861936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.862250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.862265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.862597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.862613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.862933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.862949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 
00:41:23.300 [2024-10-07 14:51:46.863272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.863288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.863586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.863602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.863926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.863942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.864266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.864282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.864614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.864630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 
00:41:23.300 [2024-10-07 14:51:46.864961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.864976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.865298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.865314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.865608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.865625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.865995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.866016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.866366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.866382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 
00:41:23.300 [2024-10-07 14:51:46.866713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.866729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.866903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.866920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.867245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.867261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.867555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.867570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.867894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.867909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 
00:41:23.300 [2024-10-07 14:51:46.868225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.868241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.868575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.868590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.868912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.868928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.869240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.869256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.869618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.869634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 
00:41:23.300 [2024-10-07 14:51:46.869960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.869976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.870299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.870315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.870609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.870625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.870953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.870968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.871262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.871278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 
00:41:23.300 [2024-10-07 14:51:46.871574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.871590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.871918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.871934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.872244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.872260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.872440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.872457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.872732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.872747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 
00:41:23.300 [2024-10-07 14:51:46.873069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.873085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.873388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.873402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.873741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.873755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.874064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.874083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 00:41:23.300 [2024-10-07 14:51:46.874409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.874424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.300 qpair failed and we were unable to recover it. 
00:41:23.300 [2024-10-07 14:51:46.874755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.300 [2024-10-07 14:51:46.874771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.875096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.875112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.875448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.875464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.875825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.875839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.876056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.876071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 
00:41:23.301 [2024-10-07 14:51:46.876380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.876395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.876719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.876734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.877057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.877072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.877424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.877439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.877731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.877745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 
00:41:23.301 [2024-10-07 14:51:46.878055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.878070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.878421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.878437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.878758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.878773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.879071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.879087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.879379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.879394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 
00:41:23.301 [2024-10-07 14:51:46.879718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.879734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.880064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.880079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.880267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.880283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.880604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.880619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.880940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.880955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 
00:41:23.301 [2024-10-07 14:51:46.881290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.881305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.881626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.881641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.881971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.881986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.882286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.882300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.882605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.882619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 
00:41:23.301 [2024-10-07 14:51:46.882944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.882958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.883288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.883304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.883675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.883690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.884014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.884029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.884359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.884373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 
00:41:23.301 [2024-10-07 14:51:46.884710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.884724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.885049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.885063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.885291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.885306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.885399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.885413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.885988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.886120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 
00:41:23.301 [2024-10-07 14:51:46.886562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.886615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.886911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.886961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.887320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.887337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.887665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.887682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.301 [2024-10-07 14:51:46.887978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.887994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 
00:41:23.301 [2024-10-07 14:51:46.888374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.301 [2024-10-07 14:51:46.888388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.301 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.888730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.888746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.889040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.889056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.889420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.889436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.889758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.889774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 
00:41:23.302 [2024-10-07 14:51:46.890105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.890120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.890308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.890324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.890616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.890631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.890968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.890983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.891298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.891314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 
00:41:23.302 [2024-10-07 14:51:46.891678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.891693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.892009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.892025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.892357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.892372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.892725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.892740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.893036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.893052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 
00:41:23.302 [2024-10-07 14:51:46.893367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.893382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.893685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.893700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.893998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.894019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.894314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.894330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.894657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.894672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 
00:41:23.302 [2024-10-07 14:51:46.895008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.895025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.895363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.895378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.895585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.895599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.895917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.895931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.896239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.896255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 
00:41:23.302 [2024-10-07 14:51:46.896559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.896574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.896869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.896884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.897219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.897234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.897409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.897425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.897736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.897752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 
00:41:23.302 [2024-10-07 14:51:46.898079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.898093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.898417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.898432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.898761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.898776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.899118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.899134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.302 [2024-10-07 14:51:46.899308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.899325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 
00:41:23.302 [2024-10-07 14:51:46.899634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.302 [2024-10-07 14:51:46.899649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.302 qpair failed and we were unable to recover it. 00:41:23.303 [2024-10-07 14:51:46.899972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.899987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 00:41:23.303 [2024-10-07 14:51:46.900356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.900371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 00:41:23.303 [2024-10-07 14:51:46.900670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.900687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 00:41:23.303 [2024-10-07 14:51:46.901010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.901025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 
00:41:23.303 [2024-10-07 14:51:46.901322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.901337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 00:41:23.303 [2024-10-07 14:51:46.901695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.901709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 00:41:23.303 [2024-10-07 14:51:46.902040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.902056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 00:41:23.303 [2024-10-07 14:51:46.902412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.902427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 00:41:23.303 [2024-10-07 14:51:46.902744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.902760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 
00:41:23.303 [2024-10-07 14:51:46.903063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.903078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 00:41:23.303 [2024-10-07 14:51:46.903413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.903428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 00:41:23.303 [2024-10-07 14:51:46.903774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.903789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 00:41:23.303 [2024-10-07 14:51:46.904109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.904124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 00:41:23.303 [2024-10-07 14:51:46.904441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.904455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 
00:41:23.303 [2024-10-07 14:51:46.904789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.904804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 00:41:23.303 [2024-10-07 14:51:46.905128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.905143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 00:41:23.303 [2024-10-07 14:51:46.905301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.905317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 00:41:23.303 [2024-10-07 14:51:46.905616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.905634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 00:41:23.303 [2024-10-07 14:51:46.905959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.905973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 
00:41:23.303 [2024-10-07 14:51:46.906285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.906301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 00:41:23.303 [2024-10-07 14:51:46.906632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.906649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 00:41:23.303 [2024-10-07 14:51:46.906968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.906983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 00:41:23.303 [2024-10-07 14:51:46.907164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.907180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 00:41:23.303 [2024-10-07 14:51:46.907454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.907469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 
00:41:23.303 [2024-10-07 14:51:46.907793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.907807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 00:41:23.303 [2024-10-07 14:51:46.908135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.908152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 00:41:23.303 [2024-10-07 14:51:46.908459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.908473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 00:41:23.303 [2024-10-07 14:51:46.908780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.908793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 00:41:23.303 [2024-10-07 14:51:46.909129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.303 [2024-10-07 14:51:46.909144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.303 qpair failed and we were unable to recover it. 
00:41:23.303 [2024-10-07 14:51:46.909440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.303 [2024-10-07 14:51:46.909456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.303 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats for roughly 115 consecutive reconnect attempts between 14:51:46.909440 and 14:51:46.947064; only the timestamps differ ...]
00:41:23.306 [2024-10-07 14:51:46.947064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.306 [2024-10-07 14:51:46.947078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.306 qpair failed and we were unable to recover it.
00:41:23.306 [2024-10-07 14:51:46.947372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.306 [2024-10-07 14:51:46.947386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.306 qpair failed and we were unable to recover it. 00:41:23.306 [2024-10-07 14:51:46.947714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.306 [2024-10-07 14:51:46.947732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.306 qpair failed and we were unable to recover it. 00:41:23.306 [2024-10-07 14:51:46.948059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.306 [2024-10-07 14:51:46.948074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.306 qpair failed and we were unable to recover it. 00:41:23.306 [2024-10-07 14:51:46.948402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.306 [2024-10-07 14:51:46.948417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.306 qpair failed and we were unable to recover it. 00:41:23.306 [2024-10-07 14:51:46.948737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.306 [2024-10-07 14:51:46.948751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.306 qpair failed and we were unable to recover it. 
00:41:23.306 [2024-10-07 14:51:46.949099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.306 [2024-10-07 14:51:46.949116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.306 qpair failed and we were unable to recover it. 00:41:23.306 [2024-10-07 14:51:46.949426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.306 [2024-10-07 14:51:46.949440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.306 qpair failed and we were unable to recover it. 00:41:23.306 [2024-10-07 14:51:46.949750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.306 [2024-10-07 14:51:46.949765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.306 qpair failed and we were unable to recover it. 00:41:23.306 [2024-10-07 14:51:46.950095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.950110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.950402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.950416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 
00:41:23.307 [2024-10-07 14:51:46.950756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.950771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.951097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.951113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.951445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.951460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.951761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.951776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.952106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.952121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 
00:41:23.307 [2024-10-07 14:51:46.952448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.952463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.952785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.952801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.953136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.953150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.953366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.953382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.953713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.953728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 
00:41:23.307 [2024-10-07 14:51:46.954028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.954043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.954250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.954265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.954550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.954565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.954897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.954911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.955247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.955263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 
00:41:23.307 [2024-10-07 14:51:46.955593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.955608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.955939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.955954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.956251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.956266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.956577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.956592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.956896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.956910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 
00:41:23.307 [2024-10-07 14:51:46.957238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.957254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.957616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.957631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.957949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.957965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.958262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.958277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.958454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.958469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 
00:41:23.307 [2024-10-07 14:51:46.958681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.958696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.958986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.959004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.959303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.959318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.959597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.959612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.959933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.959948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 
00:41:23.307 [2024-10-07 14:51:46.960308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.960323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.960645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.960661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.960984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.961003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.961320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.961335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.961631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.961645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 
00:41:23.307 [2024-10-07 14:51:46.961976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.961991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.962322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.962337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.307 [2024-10-07 14:51:46.962642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.307 [2024-10-07 14:51:46.962657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.307 qpair failed and we were unable to recover it. 00:41:23.308 [2024-10-07 14:51:46.962978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.962993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 00:41:23.308 [2024-10-07 14:51:46.963315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.963331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 
00:41:23.308 [2024-10-07 14:51:46.963671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.963686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 00:41:23.308 [2024-10-07 14:51:46.963865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.963882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 00:41:23.308 [2024-10-07 14:51:46.964168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.964184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 00:41:23.308 [2024-10-07 14:51:46.964519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.964534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 00:41:23.308 [2024-10-07 14:51:46.964839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.964853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 
00:41:23.308 [2024-10-07 14:51:46.965153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.965168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 00:41:23.308 [2024-10-07 14:51:46.965534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.965548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 00:41:23.308 [2024-10-07 14:51:46.965851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.965865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 00:41:23.308 [2024-10-07 14:51:46.966204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.966221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 00:41:23.308 [2024-10-07 14:51:46.966579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.966593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 
00:41:23.308 [2024-10-07 14:51:46.966891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.966905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 00:41:23.308 [2024-10-07 14:51:46.967223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.967238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 00:41:23.308 [2024-10-07 14:51:46.967523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.967538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 00:41:23.308 [2024-10-07 14:51:46.967842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.967856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 00:41:23.308 [2024-10-07 14:51:46.968206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.968222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 
00:41:23.308 [2024-10-07 14:51:46.968516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.968530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 00:41:23.308 [2024-10-07 14:51:46.968852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.968866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 00:41:23.308 [2024-10-07 14:51:46.969044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.969060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 00:41:23.308 [2024-10-07 14:51:46.969382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.969398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 00:41:23.308 [2024-10-07 14:51:46.969723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.969739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 
00:41:23.308 [2024-10-07 14:51:46.970055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.970070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 00:41:23.308 [2024-10-07 14:51:46.970392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.970407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 00:41:23.308 [2024-10-07 14:51:46.970621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.970636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 00:41:23.308 [2024-10-07 14:51:46.970835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.970850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 00:41:23.308 [2024-10-07 14:51:46.971163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.971178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 
00:41:23.308 [2024-10-07 14:51:46.971523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.308 [2024-10-07 14:51:46.971539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.308 qpair failed and we were unable to recover it. 
[the same three-message failure (posix.c:1055 connect() errno = 111, nvme_tcp.c:2399 sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeats verbatim for every retry from 14:51:46.971854 through 14:51:47.008903, log timestamps 00:41:23.308-00:41:23.638]
00:41:23.638 [2024-10-07 14:51:47.009233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.009248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.009564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.009579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.009910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.009924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.010234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.010249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.010544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.010559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 
00:41:23.638 [2024-10-07 14:51:47.010832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.010846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.011108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.011123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.011423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.011438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.011768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.011782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.012109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.012125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 
00:41:23.638 [2024-10-07 14:51:47.012457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.012472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.012789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.012805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.013129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.013144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.013484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.013499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.013819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.013833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 
00:41:23.638 [2024-10-07 14:51:47.014023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.014040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.014348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.014364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.014692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.014706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.015034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.015050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.015399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.015413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 
00:41:23.638 [2024-10-07 14:51:47.015718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.015734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.016053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.016068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.016399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.016414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.016742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.016757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.017084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.017100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 
00:41:23.638 [2024-10-07 14:51:47.017429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.017444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.017759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.017782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.018129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.018144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.018439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.018455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.018783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.018797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 
00:41:23.638 [2024-10-07 14:51:47.019127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.019143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.019480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.019494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.019757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.019772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.020079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.020094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.020407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.020422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 
00:41:23.638 [2024-10-07 14:51:47.020744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.020758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.021168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.021182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.021504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.021520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.638 [2024-10-07 14:51:47.021848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.638 [2024-10-07 14:51:47.021863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.638 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.022215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.022231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 
00:41:23.639 [2024-10-07 14:51:47.022443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.022458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.022774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.022789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.023115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.023130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.023460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.023475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.023801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.023816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 
00:41:23.639 [2024-10-07 14:51:47.024140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.024156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.024484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.024499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.024808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.024824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.025150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.025165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.025503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.025519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 
00:41:23.639 [2024-10-07 14:51:47.025846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.025861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.026168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.026183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.026409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.026423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.026749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.026765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.027088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.027103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 
00:41:23.639 [2024-10-07 14:51:47.027436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.027454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.027769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.027783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.028112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.028127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.028339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.028354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.028640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.028654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 
00:41:23.639 [2024-10-07 14:51:47.028969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.028983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.029300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.029315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.029636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.029651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.029973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.029988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.030286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.030301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 
00:41:23.639 [2024-10-07 14:51:47.030626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.030641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.030889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.030909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.031217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.031233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.031594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.031609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.031895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.031911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 
00:41:23.639 [2024-10-07 14:51:47.032291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.032306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.032623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.639 [2024-10-07 14:51:47.032638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.639 qpair failed and we were unable to recover it. 00:41:23.639 [2024-10-07 14:51:47.032961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.640 [2024-10-07 14:51:47.032975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.640 qpair failed and we were unable to recover it. 00:41:23.640 [2024-10-07 14:51:47.033304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.640 [2024-10-07 14:51:47.033320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.640 qpair failed and we were unable to recover it. 00:41:23.640 [2024-10-07 14:51:47.033648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.640 [2024-10-07 14:51:47.033664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.640 qpair failed and we were unable to recover it. 
00:41:23.640 [2024-10-07 14:51:47.034036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:41:23.640 [2024-10-07 14:51:47.034052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 
00:41:23.640 qpair failed and we were unable to recover it. 
00:41:23.640 [... the same three-line failure record repeats continuously from 14:51:47.034 through 14:51:47.071: posix_sock_create reports connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x61500039f100 at addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...] 
00:41:23.642 [2024-10-07 14:51:47.071749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.642 [2024-10-07 14:51:47.071763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.642 qpair failed and we were unable to recover it. 00:41:23.642 [2024-10-07 14:51:47.072066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.642 [2024-10-07 14:51:47.072081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.642 qpair failed and we were unable to recover it. 00:41:23.642 [2024-10-07 14:51:47.072413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.072427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.072758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.072773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.073110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.073125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 
00:41:23.643 [2024-10-07 14:51:47.073444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.073459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.073693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.073708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.074038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.074054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.074377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.074392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.074722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.074736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 
00:41:23.643 [2024-10-07 14:51:47.075038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.075054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.075372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.075387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.075681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.075695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.076023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.076038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.076326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.076341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 
00:41:23.643 [2024-10-07 14:51:47.076652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.076666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.076994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.077012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.077296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.077309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.077649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.077663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.077972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.077988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 
00:41:23.643 [2024-10-07 14:51:47.078343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.078358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.078687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.078702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.079020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.079036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.079395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.079409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.079698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.079712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 
00:41:23.643 [2024-10-07 14:51:47.080040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.080057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.080392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.080407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.080702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.080716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.080921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.080935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.081270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.081285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 
00:41:23.643 [2024-10-07 14:51:47.081614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.081629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.081955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.081969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.082291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.082306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.082649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.082663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.082984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.082998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 
00:41:23.643 [2024-10-07 14:51:47.083335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.083349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.083679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.083693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.084024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.084039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.643 qpair failed and we were unable to recover it. 00:41:23.643 [2024-10-07 14:51:47.084366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.643 [2024-10-07 14:51:47.084380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.084713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.084729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 
00:41:23.644 [2024-10-07 14:51:47.085057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.085071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.085384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.085399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.085714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.085728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.086057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.086073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.086406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.086421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 
00:41:23.644 [2024-10-07 14:51:47.086706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.086720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.087044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.087059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.087380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.087395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.087723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.087738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.088066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.088082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 
00:41:23.644 [2024-10-07 14:51:47.088405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.088420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.088794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.088809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.088993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.089013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.089333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.089347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.089687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.089702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 
00:41:23.644 [2024-10-07 14:51:47.090014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.090029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.090344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.090359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.090651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.090665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.090867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.090881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.091179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.091193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 
00:41:23.644 [2024-10-07 14:51:47.091541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.091555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.091843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.091858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.092180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.092196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.092529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.092545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.092875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.092890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 
00:41:23.644 [2024-10-07 14:51:47.093219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.093238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.093575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.093589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.093913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.093928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.094211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.094226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.094442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.094457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 
00:41:23.644 [2024-10-07 14:51:47.094780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.094794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.095103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.095118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.095443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.095457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.095783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.095797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.096079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.096095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 
00:41:23.644 [2024-10-07 14:51:47.096374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.096389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.096717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.644 [2024-10-07 14:51:47.096731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.644 qpair failed and we were unable to recover it. 00:41:23.644 [2024-10-07 14:51:47.097035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.645 [2024-10-07 14:51:47.097053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.645 qpair failed and we were unable to recover it. 00:41:23.645 [2024-10-07 14:51:47.097363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.645 [2024-10-07 14:51:47.097378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.645 qpair failed and we were unable to recover it. 00:41:23.645 [2024-10-07 14:51:47.097702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.645 [2024-10-07 14:51:47.097717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.645 qpair failed and we were unable to recover it. 
00:41:23.645 [2024-10-07 14:51:47.097917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.645 [2024-10-07 14:51:47.097932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.645 qpair failed and we were unable to recover it. 00:41:23.645 [2024-10-07 14:51:47.098267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.645 [2024-10-07 14:51:47.098282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.645 qpair failed and we were unable to recover it. 00:41:23.645 [2024-10-07 14:51:47.098580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.645 [2024-10-07 14:51:47.098595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.645 qpair failed and we were unable to recover it. 00:41:23.645 [2024-10-07 14:51:47.098934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.645 [2024-10-07 14:51:47.098949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.645 qpair failed and we were unable to recover it. 00:41:23.645 [2024-10-07 14:51:47.099321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.645 [2024-10-07 14:51:47.099336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.645 qpair failed and we were unable to recover it. 
00:41:23.645 [2024-10-07 14:51:47.099658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.099673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.099968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.099983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.100298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.100314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.100689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.100703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.101011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.101027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.101349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.101363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.101692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.101706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.102014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.102029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.102328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.102343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.102673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.102688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.102984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.102998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.103315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.103330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.103653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.103668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.103993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.104013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.104345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.104359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.104560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.104573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.104914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.104928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.105254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.105271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.105600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.105615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.105922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.105938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.106255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.106273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.106575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.106591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.106912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.106926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.107255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.107271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.107593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.107607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.107949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.107963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.108265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.108279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.108599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.108613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.108981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.108996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.109298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.109314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.109681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.645 [2024-10-07 14:51:47.109695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.645 qpair failed and we were unable to recover it.
00:41:23.645 [2024-10-07 14:51:47.110052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.110067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.110362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.110376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.110663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.110677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.111012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.111026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.111361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.111375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.111687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.111701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.111911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.111924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.112247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.112262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.112584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.112598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.112956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.112972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.113332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.113347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.113667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.113683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.114013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.114028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.114313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.114328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.114632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.114646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.114971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.114986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.115321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.115346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.115534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.115550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.115853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.115869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.116197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.116213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.116524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.116539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.116858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.116872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.117183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.117199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.117531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.117545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.117766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.117780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.118132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.118146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.118315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.118331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.118695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.118709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.119006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.119020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.119348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.119365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.119692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.119707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.120025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.120040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.120219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.120235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.120554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.120569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.120904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.120918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.121104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.121120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.121411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.121426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.121752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.121767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.122107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.122122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.122447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.646 [2024-10-07 14:51:47.122462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.646 qpair failed and we were unable to recover it.
00:41:23.646 [2024-10-07 14:51:47.122780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.647 [2024-10-07 14:51:47.122794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.647 qpair failed and we were unable to recover it.
00:41:23.647 [2024-10-07 14:51:47.123119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.647 [2024-10-07 14:51:47.123134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.647 qpair failed and we were unable to recover it.
00:41:23.647 [2024-10-07 14:51:47.123429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.647 [2024-10-07 14:51:47.123445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.647 qpair failed and we were unable to recover it.
00:41:23.647 [2024-10-07 14:51:47.123773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.647 [2024-10-07 14:51:47.123787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.647 qpair failed and we were unable to recover it.
00:41:23.647 [2024-10-07 14:51:47.124096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.647 [2024-10-07 14:51:47.124113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.647 qpair failed and we were unable to recover it.
00:41:23.647 [2024-10-07 14:51:47.124478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.647 [2024-10-07 14:51:47.124492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.647 qpair failed and we were unable to recover it.
00:41:23.647 [2024-10-07 14:51:47.124805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.647 [2024-10-07 14:51:47.124820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.647 qpair failed and we were unable to recover it.
00:41:23.647 [2024-10-07 14:51:47.125146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.647 [2024-10-07 14:51:47.125161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.647 qpair failed and we were unable to recover it.
00:41:23.647 [2024-10-07 14:51:47.125485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.647 [2024-10-07 14:51:47.125500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.647 qpair failed and we were unable to recover it.
00:41:23.647 [2024-10-07 14:51:47.125825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.647 [2024-10-07 14:51:47.125839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.647 qpair failed and we were unable to recover it.
00:41:23.647 [2024-10-07 14:51:47.126176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.647 [2024-10-07 14:51:47.126191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.647 qpair failed and we were unable to recover it.
00:41:23.647 [2024-10-07 14:51:47.126482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.647 [2024-10-07 14:51:47.126496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.647 qpair failed and we were unable to recover it.
00:41:23.647 [2024-10-07 14:51:47.126811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.647 [2024-10-07 14:51:47.126825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.647 qpair failed and we were unable to recover it.
00:41:23.647 [2024-10-07 14:51:47.127146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.647 [2024-10-07 14:51:47.127162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.647 qpair failed and we were unable to recover it.
00:41:23.647 [2024-10-07 14:51:47.127494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.647 [2024-10-07 14:51:47.127508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.647 qpair failed and we were unable to recover it.
00:41:23.647 [2024-10-07 14:51:47.127819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.647 [2024-10-07 14:51:47.127835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.647 qpair failed and we were unable to recover it.
00:41:23.647 [2024-10-07 14:51:47.128029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.647 [2024-10-07 14:51:47.128046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.647 qpair failed and we were unable to recover it.
00:41:23.647 [2024-10-07 14:51:47.128338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.647 [2024-10-07 14:51:47.128353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.647 qpair failed and we were unable to recover it.
00:41:23.647 [2024-10-07 14:51:47.128690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.647 [2024-10-07 14:51:47.128704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.647 qpair failed and we were unable to recover it.
00:41:23.647 [2024-10-07 14:51:47.129035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.647 [2024-10-07 14:51:47.129051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.647 qpair failed and we were unable to recover it.
00:41:23.647 [2024-10-07 14:51:47.129377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.647 [2024-10-07 14:51:47.129391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.647 qpair failed and we were unable to recover it.
00:41:23.647 [2024-10-07 14:51:47.129722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.647 [2024-10-07 14:51:47.129736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.647 qpair failed and we were unable to recover it.
00:41:23.647 [2024-10-07 14:51:47.130070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.647 [2024-10-07 14:51:47.130085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.647 qpair failed and we were unable to recover it.
00:41:23.647 [2024-10-07 14:51:47.130411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.647 [2024-10-07 14:51:47.130425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.647 qpair failed and we were unable to recover it.
00:41:23.647 [2024-10-07 14:51:47.130786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.647 [2024-10-07 14:51:47.130801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.647 qpair failed and we were unable to recover it. 00:41:23.647 [2024-10-07 14:51:47.131106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.647 [2024-10-07 14:51:47.131123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.647 qpair failed and we were unable to recover it. 00:41:23.647 [2024-10-07 14:51:47.131455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.647 [2024-10-07 14:51:47.131469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.647 qpair failed and we were unable to recover it. 00:41:23.647 [2024-10-07 14:51:47.131797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.647 [2024-10-07 14:51:47.131812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.647 qpair failed and we were unable to recover it. 00:41:23.647 [2024-10-07 14:51:47.132140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.647 [2024-10-07 14:51:47.132155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.647 qpair failed and we were unable to recover it. 
00:41:23.647 [2024-10-07 14:51:47.132376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.647 [2024-10-07 14:51:47.132394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.647 qpair failed and we were unable to recover it. 00:41:23.647 [2024-10-07 14:51:47.132700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.647 [2024-10-07 14:51:47.132717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.647 qpair failed and we were unable to recover it. 00:41:23.647 [2024-10-07 14:51:47.133045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.647 [2024-10-07 14:51:47.133060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.647 qpair failed and we were unable to recover it. 00:41:23.647 [2024-10-07 14:51:47.133393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.647 [2024-10-07 14:51:47.133408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.647 qpair failed and we were unable to recover it. 00:41:23.647 [2024-10-07 14:51:47.133805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.647 [2024-10-07 14:51:47.133819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.647 qpair failed and we were unable to recover it. 
00:41:23.647 [2024-10-07 14:51:47.134111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.647 [2024-10-07 14:51:47.134126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.647 qpair failed and we were unable to recover it. 00:41:23.647 [2024-10-07 14:51:47.134454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.647 [2024-10-07 14:51:47.134469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.647 qpair failed and we were unable to recover it. 00:41:23.647 [2024-10-07 14:51:47.134793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.647 [2024-10-07 14:51:47.134808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.647 qpair failed and we were unable to recover it. 00:41:23.647 [2024-10-07 14:51:47.135136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.647 [2024-10-07 14:51:47.135151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.647 qpair failed and we were unable to recover it. 00:41:23.647 [2024-10-07 14:51:47.135519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.647 [2024-10-07 14:51:47.135534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.647 qpair failed and we were unable to recover it. 
00:41:23.647 [2024-10-07 14:51:47.135708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.647 [2024-10-07 14:51:47.135723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.647 qpair failed and we were unable to recover it. 00:41:23.647 [2024-10-07 14:51:47.136048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.136063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.136361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.136377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.136694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.136708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.137050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.137065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 
00:41:23.648 [2024-10-07 14:51:47.137279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.137293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.137491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.137505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.137849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.137863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.138225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.138240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.138565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.138581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 
00:41:23.648 [2024-10-07 14:51:47.138906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.138920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.139237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.139253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.139580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.139594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.139902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.139917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.140262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.140278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 
00:41:23.648 [2024-10-07 14:51:47.140579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.140594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.140808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.140823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.141142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.141158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.141463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.141478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.141816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.141831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 
00:41:23.648 [2024-10-07 14:51:47.142159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.142173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.142499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.142514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.142837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.142851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.143158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.143173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.143490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.143504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 
00:41:23.648 [2024-10-07 14:51:47.143830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.143845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.144174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.144188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.144527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.144542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.144853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.144868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.145179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.145194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 
00:41:23.648 [2024-10-07 14:51:47.145519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.145537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.145831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.145845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.146060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.146075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.146367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.146381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.146707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.146722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 
00:41:23.648 [2024-10-07 14:51:47.147026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.147040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.648 [2024-10-07 14:51:47.147344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.648 [2024-10-07 14:51:47.147358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.648 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.147683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.147699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.147863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.147879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.148182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.148199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 
00:41:23.649 [2024-10-07 14:51:47.148532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.148547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.148871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.148886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.149223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.149238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.149559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.149574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.149893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.149908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 
00:41:23.649 [2024-10-07 14:51:47.150211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.150225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.150532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.150546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.150876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.150891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.151215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.151231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.151550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.151565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 
00:41:23.649 [2024-10-07 14:51:47.151898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.151912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.152083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.152099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.152421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.152436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.152760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.152775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.153145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.153159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 
00:41:23.649 [2024-10-07 14:51:47.153450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.153464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.153783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.153798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.154123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.154139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.154435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.154449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.154785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.154799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 
00:41:23.649 [2024-10-07 14:51:47.155130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.155146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.155471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.155485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.155811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.155825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.156158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.156172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.156507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.156522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 
00:41:23.649 [2024-10-07 14:51:47.156854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.156868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.157174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.157189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.157528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.157542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.157866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.157880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 00:41:23.649 [2024-10-07 14:51:47.158215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.649 [2024-10-07 14:51:47.158230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.649 qpair failed and we were unable to recover it. 
00:41:23.649 [2024-10-07 14:51:47.158588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.649 [2024-10-07 14:51:47.158602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.649 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix.c:1055 connect() failed, errno = 111; nvme_tcp.c:2399 sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 14:51:47.158938 through 14:51:47.196369 ...]
00:41:23.652 [2024-10-07 14:51:47.196369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.652 [2024-10-07 14:51:47.196384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.652 qpair failed and we were unable to recover it.
00:41:23.652 [2024-10-07 14:51:47.196674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.652 [2024-10-07 14:51:47.196688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.652 qpair failed and we were unable to recover it. 00:41:23.652 [2024-10-07 14:51:47.197018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.652 [2024-10-07 14:51:47.197033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.652 qpair failed and we were unable to recover it. 00:41:23.652 [2024-10-07 14:51:47.197357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.652 [2024-10-07 14:51:47.197372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.652 qpair failed and we were unable to recover it. 00:41:23.652 [2024-10-07 14:51:47.197664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.652 [2024-10-07 14:51:47.197679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.652 qpair failed and we were unable to recover it. 00:41:23.652 [2024-10-07 14:51:47.198016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.652 [2024-10-07 14:51:47.198032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.652 qpair failed and we were unable to recover it. 
00:41:23.652 [2024-10-07 14:51:47.198354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.652 [2024-10-07 14:51:47.198368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.652 qpair failed and we were unable to recover it. 00:41:23.652 [2024-10-07 14:51:47.198697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.652 [2024-10-07 14:51:47.198711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.652 qpair failed and we were unable to recover it. 00:41:23.652 [2024-10-07 14:51:47.198862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.198878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.199196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.199211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.199497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.199512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 
00:41:23.653 [2024-10-07 14:51:47.199844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.199858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.200201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.200217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.200394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.200411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.200693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.200709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.201051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.201066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 
00:41:23.653 [2024-10-07 14:51:47.201374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.201388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.201755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.201770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.202031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.202046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.202309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.202323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.202647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.202661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 
00:41:23.653 [2024-10-07 14:51:47.203028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.203044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.203374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.203389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.203729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.203744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.204061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.204077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.204409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.204424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 
00:41:23.653 [2024-10-07 14:51:47.204715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.204731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.204938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.204953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.205309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.205324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.205643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.205657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.205972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.205987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 
00:41:23.653 [2024-10-07 14:51:47.206319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.206334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.206661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.206676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.206995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.207017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.207318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.207333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.207633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.207649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 
00:41:23.653 [2024-10-07 14:51:47.207976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.207991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.208290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.208306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.208629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.208644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.208977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.208992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.209311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.209327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 
00:41:23.653 [2024-10-07 14:51:47.209661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.209676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.210010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.210027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.210355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.210371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.210695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.210710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.211040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.211055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 
00:41:23.653 [2024-10-07 14:51:47.211370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.211386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.211690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.653 [2024-10-07 14:51:47.211706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.653 qpair failed and we were unable to recover it. 00:41:23.653 [2024-10-07 14:51:47.212032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.212048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.212367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.212382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.212682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.212698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 
00:41:23.654 [2024-10-07 14:51:47.213035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.213050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.213342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.213357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.213684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.213698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.213913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.213927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.214150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.214166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 
00:41:23.654 [2024-10-07 14:51:47.214487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.214501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.214822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.214837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.215131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.215147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.215471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.215487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.215811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.215826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 
00:41:23.654 [2024-10-07 14:51:47.216114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.216130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.216468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.216482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.216816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.216832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.217153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.217168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.217502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.217517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 
00:41:23.654 [2024-10-07 14:51:47.217843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.217859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.218187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.218204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.218504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.218520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.218704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.218720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.218964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.218979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 
00:41:23.654 [2024-10-07 14:51:47.219334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.219349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.219672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.219687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.220013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.220031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.220354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.220368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.220694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.220708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 
00:41:23.654 [2024-10-07 14:51:47.221035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.221050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.221363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.221379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.221707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.221720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.222054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.222070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 00:41:23.654 [2024-10-07 14:51:47.222400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.654 [2024-10-07 14:51:47.222414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.654 qpair failed and we were unable to recover it. 
00:41:23.654 [... identical error pair repeats continuously from 14:51:47.222742 through 14:51:47.258506: posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:41:23.657 [2024-10-07 14:51:47.258869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.657 [2024-10-07 14:51:47.258884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.657 qpair failed and we were unable to recover it. 00:41:23.657 [2024-10-07 14:51:47.259182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.657 [2024-10-07 14:51:47.259197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.657 qpair failed and we were unable to recover it. 00:41:23.657 [2024-10-07 14:51:47.259519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.657 [2024-10-07 14:51:47.259536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.657 qpair failed and we were unable to recover it. 00:41:23.657 [2024-10-07 14:51:47.259857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.657 [2024-10-07 14:51:47.259872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.657 qpair failed and we were unable to recover it. 00:41:23.657 [2024-10-07 14:51:47.260183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.657 [2024-10-07 14:51:47.260198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.657 qpair failed and we were unable to recover it. 
00:41:23.657 [2024-10-07 14:51:47.260413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.657 [2024-10-07 14:51:47.260426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.657 qpair failed and we were unable to recover it. 00:41:23.657 [2024-10-07 14:51:47.260742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.657 [2024-10-07 14:51:47.260756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.657 qpair failed and we were unable to recover it. 00:41:23.657 [2024-10-07 14:51:47.261083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.657 [2024-10-07 14:51:47.261100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.657 qpair failed and we were unable to recover it. 00:41:23.657 [2024-10-07 14:51:47.261439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.657 [2024-10-07 14:51:47.261454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.657 qpair failed and we were unable to recover it. 00:41:23.657 [2024-10-07 14:51:47.261780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.657 [2024-10-07 14:51:47.261795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.657 qpair failed and we were unable to recover it. 
00:41:23.657 [2024-10-07 14:51:47.262162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.657 [2024-10-07 14:51:47.262177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.657 qpair failed and we were unable to recover it. 00:41:23.657 [2024-10-07 14:51:47.262502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.657 [2024-10-07 14:51:47.262517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.262850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.262863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.263196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.263212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.263532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.263546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 
00:41:23.658 [2024-10-07 14:51:47.263871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.263884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.264224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.264239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.264559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.264574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.264897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.264913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.265235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.265251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 
00:41:23.658 [2024-10-07 14:51:47.265582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.265596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.265902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.265916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.266219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.266234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.266425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.266440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.266765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.266780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 
00:41:23.658 [2024-10-07 14:51:47.267068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.267083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.267439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.267454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.267779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.267794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.268112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.268127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.268423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.268437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 
00:41:23.658 [2024-10-07 14:51:47.268745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.268759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.269084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.269107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.269478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.269493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.269796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.269812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.270136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.270151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 
00:41:23.658 [2024-10-07 14:51:47.270479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.270493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.270672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.270688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.271015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.271030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.271251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.271266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.271586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.271600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 
00:41:23.658 [2024-10-07 14:51:47.271934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.271948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.272268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.272284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.272607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.272624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.272949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.272964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.273280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.273295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 
00:41:23.658 [2024-10-07 14:51:47.273628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.273644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.273968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.273982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.274246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.274261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.274585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.274601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.274924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.274939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 
00:41:23.658 [2024-10-07 14:51:47.275253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.658 [2024-10-07 14:51:47.275268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.658 qpair failed and we were unable to recover it. 00:41:23.658 [2024-10-07 14:51:47.275591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.275606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 00:41:23.659 [2024-10-07 14:51:47.275938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.275953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 00:41:23.659 [2024-10-07 14:51:47.276314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.276329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 00:41:23.659 [2024-10-07 14:51:47.276652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.276668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 
00:41:23.659 [2024-10-07 14:51:47.276992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.277012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 00:41:23.659 [2024-10-07 14:51:47.277302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.277317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 00:41:23.659 [2024-10-07 14:51:47.277638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.277654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 00:41:23.659 [2024-10-07 14:51:47.277971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.277986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 00:41:23.659 [2024-10-07 14:51:47.278325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.278341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 
00:41:23.659 [2024-10-07 14:51:47.278635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.278649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 00:41:23.659 [2024-10-07 14:51:47.278977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.278992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 00:41:23.659 [2024-10-07 14:51:47.279293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.279308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 00:41:23.659 [2024-10-07 14:51:47.279518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.279532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 00:41:23.659 [2024-10-07 14:51:47.279864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.279878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 
00:41:23.659 [2024-10-07 14:51:47.280223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.280239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 00:41:23.659 [2024-10-07 14:51:47.280545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.280560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 00:41:23.659 [2024-10-07 14:51:47.280754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.280770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 00:41:23.659 [2024-10-07 14:51:47.281081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.281096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 00:41:23.659 [2024-10-07 14:51:47.281401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.281416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 
00:41:23.659 [2024-10-07 14:51:47.281734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.281748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 00:41:23.659 [2024-10-07 14:51:47.282051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.282066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 00:41:23.659 [2024-10-07 14:51:47.282372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.282391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 00:41:23.659 [2024-10-07 14:51:47.282714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.282728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 00:41:23.659 [2024-10-07 14:51:47.283099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.283114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 
00:41:23.659 [2024-10-07 14:51:47.283425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.659 [2024-10-07 14:51:47.283440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.659 qpair failed and we were unable to recover it. 
00:41:23.985 [... same three-line sequence (posix.c:1055 connect() failed, errno = 111 / nvme_tcp.c:2399 sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeated ~114 more times, 14:51:47.283807 through 14:51:47.320359 ...]
00:41:23.985 [2024-10-07 14:51:47.320694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.985 [2024-10-07 14:51:47.320708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.985 qpair failed and we were unable to recover it. 00:41:23.985 [2024-10-07 14:51:47.321009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.985 [2024-10-07 14:51:47.321024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.985 qpair failed and we were unable to recover it. 00:41:23.985 [2024-10-07 14:51:47.321305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.985 [2024-10-07 14:51:47.321320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.985 qpair failed and we were unable to recover it. 00:41:23.985 [2024-10-07 14:51:47.321628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.985 [2024-10-07 14:51:47.321643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.985 qpair failed and we were unable to recover it. 00:41:23.985 [2024-10-07 14:51:47.321966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.985 [2024-10-07 14:51:47.321980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.985 qpair failed and we were unable to recover it. 
00:41:23.985 [2024-10-07 14:51:47.322306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.985 [2024-10-07 14:51:47.322322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.985 qpair failed and we were unable to recover it. 00:41:23.985 [2024-10-07 14:51:47.322654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.985 [2024-10-07 14:51:47.322669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.985 qpair failed and we were unable to recover it. 00:41:23.985 [2024-10-07 14:51:47.322995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.985 [2024-10-07 14:51:47.323017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.985 qpair failed and we were unable to recover it. 00:41:23.985 [2024-10-07 14:51:47.323298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.985 [2024-10-07 14:51:47.323312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.985 qpair failed and we were unable to recover it. 00:41:23.985 [2024-10-07 14:51:47.323630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.985 [2024-10-07 14:51:47.323648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.985 qpair failed and we were unable to recover it. 
00:41:23.985 [2024-10-07 14:51:47.323953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.985 [2024-10-07 14:51:47.323967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.985 qpair failed and we were unable to recover it. 00:41:23.985 [2024-10-07 14:51:47.324297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.985 [2024-10-07 14:51:47.324313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.985 qpair failed and we were unable to recover it. 00:41:23.985 [2024-10-07 14:51:47.324669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.985 [2024-10-07 14:51:47.324684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.985 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.325009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.325026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.325346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.325361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 
00:41:23.986 [2024-10-07 14:51:47.325686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.325701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.326024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.326039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.326327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.326341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.326686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.326699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.327010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.327025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 
00:41:23.986 [2024-10-07 14:51:47.327329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.327343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.327545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.327559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.327871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.327885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.328220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.328237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.328558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.328573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 
00:41:23.986 [2024-10-07 14:51:47.328897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.328911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.329233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.329249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.329572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.329588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.329894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.329909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.330234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.330250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 
00:41:23.986 [2024-10-07 14:51:47.330557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.330572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.330905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.330921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.331249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.331264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.331582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.331597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.331938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.331953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 
00:41:23.986 [2024-10-07 14:51:47.332283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.332299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.332623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.332638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.333003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.333018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.333333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.333348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.333673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.333693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 
00:41:23.986 [2024-10-07 14:51:47.333890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.333906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.334237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.334252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.334583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.334599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.334921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.334937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.335352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.335367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 
00:41:23.986 [2024-10-07 14:51:47.335685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.335701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.336033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.336048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.336366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.336381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.336715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.336729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.337048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.337064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 
00:41:23.986 [2024-10-07 14:51:47.337412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.337427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.337739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.337753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.986 [2024-10-07 14:51:47.338083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.986 [2024-10-07 14:51:47.338098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.986 qpair failed and we were unable to recover it. 00:41:23.987 [2024-10-07 14:51:47.338428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.987 [2024-10-07 14:51:47.338443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.987 qpair failed and we were unable to recover it. 00:41:23.987 [2024-10-07 14:51:47.338775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.987 [2024-10-07 14:51:47.338789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.987 qpair failed and we were unable to recover it. 
00:41:23.987 [2024-10-07 14:51:47.339117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.987 [2024-10-07 14:51:47.339132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.987 qpair failed and we were unable to recover it. 00:41:23.987 [2024-10-07 14:51:47.339457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.987 [2024-10-07 14:51:47.339472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.987 qpair failed and we were unable to recover it. 00:41:23.987 [2024-10-07 14:51:47.339648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.987 [2024-10-07 14:51:47.339664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.987 qpair failed and we were unable to recover it. 00:41:23.987 [2024-10-07 14:51:47.339969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.987 [2024-10-07 14:51:47.339984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.987 qpair failed and we were unable to recover it. 00:41:23.987 [2024-10-07 14:51:47.340281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.987 [2024-10-07 14:51:47.340296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.987 qpair failed and we were unable to recover it. 
00:41:23.987 [2024-10-07 14:51:47.340624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.987 [2024-10-07 14:51:47.340639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.987 qpair failed and we were unable to recover it. 00:41:23.987 [2024-10-07 14:51:47.340972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.987 [2024-10-07 14:51:47.340987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.987 qpair failed and we were unable to recover it. 00:41:23.987 [2024-10-07 14:51:47.341196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.987 [2024-10-07 14:51:47.341211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.987 qpair failed and we were unable to recover it. 00:41:23.987 [2024-10-07 14:51:47.341552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.987 [2024-10-07 14:51:47.341567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.987 qpair failed and we were unable to recover it. 00:41:23.987 [2024-10-07 14:51:47.341883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.987 [2024-10-07 14:51:47.341897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.987 qpair failed and we were unable to recover it. 
00:41:23.987 [2024-10-07 14:51:47.342214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.987 [2024-10-07 14:51:47.342230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.987 qpair failed and we were unable to recover it. 00:41:23.987 [2024-10-07 14:51:47.342558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.987 [2024-10-07 14:51:47.342573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.987 qpair failed and we were unable to recover it. 00:41:23.987 [2024-10-07 14:51:47.342896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.987 [2024-10-07 14:51:47.342911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.987 qpair failed and we were unable to recover it. 00:41:23.987 [2024-10-07 14:51:47.343232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.987 [2024-10-07 14:51:47.343248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.987 qpair failed and we were unable to recover it. 00:41:23.987 [2024-10-07 14:51:47.343565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.987 [2024-10-07 14:51:47.343580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.987 qpair failed and we were unable to recover it. 
00:41:23.987 [2024-10-07 14:51:47.343786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.987 [2024-10-07 14:51:47.343801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.987 qpair failed and we were unable to recover it. 00:41:23.987 [2024-10-07 14:51:47.344108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.987 [2024-10-07 14:51:47.344124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.987 qpair failed and we were unable to recover it. 00:41:23.987 [2024-10-07 14:51:47.344422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.987 [2024-10-07 14:51:47.344437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.987 qpair failed and we were unable to recover it. 00:41:23.987 [2024-10-07 14:51:47.344721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.987 [2024-10-07 14:51:47.344735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.987 qpair failed and we were unable to recover it. 00:41:23.987 [2024-10-07 14:51:47.345021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.987 [2024-10-07 14:51:47.345036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.987 qpair failed and we were unable to recover it. 
00:41:23.987 [2024-10-07 14:51:47.345319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.987 [2024-10-07 14:51:47.345334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.987 qpair failed and we were unable to recover it.
00:41:23.987 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x61500039f100, addr=10.0.0.2, port=4420 repeats continuously through 2024-10-07 14:51:47.383031 ...]
00:41:23.990 [2024-10-07 14:51:47.383365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.990 [2024-10-07 14:51:47.383381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.990 qpair failed and we were unable to recover it. 00:41:23.990 [2024-10-07 14:51:47.383717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.990 [2024-10-07 14:51:47.383732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.990 qpair failed and we were unable to recover it. 00:41:23.990 [2024-10-07 14:51:47.384051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.990 [2024-10-07 14:51:47.384066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.990 qpair failed and we were unable to recover it. 00:41:23.990 [2024-10-07 14:51:47.384428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.990 [2024-10-07 14:51:47.384442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.990 qpair failed and we were unable to recover it. 00:41:23.990 [2024-10-07 14:51:47.384777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.990 [2024-10-07 14:51:47.384791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.990 qpair failed and we were unable to recover it. 
00:41:23.990 [2024-10-07 14:51:47.385129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.990 [2024-10-07 14:51:47.385144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.990 qpair failed and we were unable to recover it. 00:41:23.990 [2024-10-07 14:51:47.385469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.990 [2024-10-07 14:51:47.385484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.990 qpair failed and we were unable to recover it. 00:41:23.990 [2024-10-07 14:51:47.385813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.990 [2024-10-07 14:51:47.385828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.990 qpair failed and we were unable to recover it. 00:41:23.990 [2024-10-07 14:51:47.386140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.990 [2024-10-07 14:51:47.386154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.990 qpair failed and we were unable to recover it. 00:41:23.990 [2024-10-07 14:51:47.386485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.990 [2024-10-07 14:51:47.386502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.990 qpair failed and we were unable to recover it. 
00:41:23.990 [2024-10-07 14:51:47.386823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.990 [2024-10-07 14:51:47.386838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.990 qpair failed and we were unable to recover it. 00:41:23.990 [2024-10-07 14:51:47.387167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.990 [2024-10-07 14:51:47.387182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.990 qpair failed and we were unable to recover it. 00:41:23.990 [2024-10-07 14:51:47.387515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.990 [2024-10-07 14:51:47.387530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.990 qpair failed and we were unable to recover it. 00:41:23.990 [2024-10-07 14:51:47.387858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.990 [2024-10-07 14:51:47.387872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.990 qpair failed and we were unable to recover it. 00:41:23.990 [2024-10-07 14:51:47.388252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.990 [2024-10-07 14:51:47.388267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.990 qpair failed and we were unable to recover it. 
00:41:23.991 [2024-10-07 14:51:47.388544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.991 [2024-10-07 14:51:47.388558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.991 qpair failed and we were unable to recover it. 00:41:23.991 [2024-10-07 14:51:47.388894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.991 [2024-10-07 14:51:47.388908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.991 qpair failed and we were unable to recover it. 00:41:23.991 [2024-10-07 14:51:47.389226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.991 [2024-10-07 14:51:47.389241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.991 qpair failed and we were unable to recover it. 00:41:23.991 [2024-10-07 14:51:47.389564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.991 [2024-10-07 14:51:47.389578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.991 qpair failed and we were unable to recover it. 00:41:23.991 [2024-10-07 14:51:47.389902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.991 [2024-10-07 14:51:47.389917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.991 qpair failed and we were unable to recover it. 
00:41:23.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3307712 Killed "${NVMF_APP[@]}" "$@"
00:41:23.991 14:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:41:23.991 14:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:41:23.991 14:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:41:23.991 14:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:41:23.991 14:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:41:23.991 14:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3308588
00:41:23.992 14:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3308588
00:41:23.992 14:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:41:23.992 14:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3308588 ']'
00:41:23.992 14:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:41:23.992 14:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:41:23.992 14:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:41:23.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:41:23.992 14:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:41:23.992 14:51:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:41:23.993 [2024-10-07 14:51:47.412940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.412956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.413172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.413188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.413461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.413476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.413808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.413823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.414152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.414169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 
00:41:23.993 [2024-10-07 14:51:47.414400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.414420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.414719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.414734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.415069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.415085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.415403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.415418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.415732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.415747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 
00:41:23.993 [2024-10-07 14:51:47.416073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.416090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.416407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.416423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.416749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.416764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.417068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.417084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.417431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.417447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 
00:41:23.993 [2024-10-07 14:51:47.417773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.417788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.418061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.418077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.418380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.418396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.418683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.418698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.419020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.419036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 
00:41:23.993 [2024-10-07 14:51:47.419417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.419433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.419758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.419773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.420069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.420086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.420303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.420318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.420625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.420640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 
00:41:23.993 [2024-10-07 14:51:47.420959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.420975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.421291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.421309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.421646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.421661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.421995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.422015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.422263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.422280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 
00:41:23.993 [2024-10-07 14:51:47.422611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.422627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.422956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.422972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.423334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.423350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.423678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.423693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.423908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.423923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 
00:41:23.993 [2024-10-07 14:51:47.424179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.424196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.424552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.424569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.424901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.993 [2024-10-07 14:51:47.424916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.993 qpair failed and we were unable to recover it. 00:41:23.993 [2024-10-07 14:51:47.425243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.425259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.425648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.425663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 
00:41:23.994 [2024-10-07 14:51:47.425880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.425894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.426216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.426231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.426544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.426567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.426784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.426799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.426989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.427008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 
00:41:23.994 [2024-10-07 14:51:47.427319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.427335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.427642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.427656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.427973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.427989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.428254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.428269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.428600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.428615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 
00:41:23.994 [2024-10-07 14:51:47.428935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.428950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.429190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.429205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.429537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.429553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.429927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.429943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.430173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.430188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 
00:41:23.994 [2024-10-07 14:51:47.430585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.430601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.430780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.430794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.431116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.431132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.431506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.431521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.431861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.431877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 
00:41:23.994 [2024-10-07 14:51:47.432112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.432127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.432466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.432482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.432803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.432819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.433175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.433190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.433492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.433508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 
00:41:23.994 [2024-10-07 14:51:47.433844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.433860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.434189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.434205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.434496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.434511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.434857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.434873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 00:41:23.994 [2024-10-07 14:51:47.435208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.435224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.994 qpair failed and we were unable to recover it. 
00:41:23.994 [2024-10-07 14:51:47.435411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.994 [2024-10-07 14:51:47.435427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.995 qpair failed and we were unable to recover it. 00:41:23.995 [2024-10-07 14:51:47.435587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.995 [2024-10-07 14:51:47.435604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.995 qpair failed and we were unable to recover it. 00:41:23.995 [2024-10-07 14:51:47.435789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.995 [2024-10-07 14:51:47.435806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.995 qpair failed and we were unable to recover it. 00:41:23.995 [2024-10-07 14:51:47.436125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.995 [2024-10-07 14:51:47.436140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.995 qpair failed and we were unable to recover it. 00:41:23.995 [2024-10-07 14:51:47.436451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.995 [2024-10-07 14:51:47.436466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.995 qpair failed and we were unable to recover it. 
00:41:23.995 [2024-10-07 14:51:47.436797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.995 [2024-10-07 14:51:47.436812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.995 qpair failed and we were unable to recover it. 00:41:23.995 [2024-10-07 14:51:47.437158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.995 [2024-10-07 14:51:47.437174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.995 qpair failed and we were unable to recover it. 00:41:23.995 [2024-10-07 14:51:47.437503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.995 [2024-10-07 14:51:47.437519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.995 qpair failed and we were unable to recover it. 00:41:23.995 [2024-10-07 14:51:47.437821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.995 [2024-10-07 14:51:47.437836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.995 qpair failed and we were unable to recover it. 00:41:23.995 [2024-10-07 14:51:47.438168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.995 [2024-10-07 14:51:47.438185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.995 qpair failed and we were unable to recover it. 
00:41:23.997 [2024-10-07 14:51:47.472874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.997 [2024-10-07 14:51:47.472889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.997 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.473177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.473193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.473539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.473555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.473927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.473943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.474279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.474295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 
00:41:23.998 [2024-10-07 14:51:47.474604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.474620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.474934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.474949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.475268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.475284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.475433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.475449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.475768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.475784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 
00:41:23.998 [2024-10-07 14:51:47.476007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.476023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.476376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.476390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.476711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.476726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.476791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.476805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.477147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.477162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 
00:41:23.998 [2024-10-07 14:51:47.477476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.477495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.477827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.477841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.478030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.478045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.478275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.478290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.478618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.478633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 
00:41:23.998 [2024-10-07 14:51:47.478967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.478983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.479302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.479317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.479513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.479529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.479736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.479751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.479966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.479981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 
00:41:23.998 [2024-10-07 14:51:47.480312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.480327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.480658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.480673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.481028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.481044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.481403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.481422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.481750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.481766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 
00:41:23.998 [2024-10-07 14:51:47.482103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.482119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.482465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.482481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.482775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.482789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.998 [2024-10-07 14:51:47.483109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.998 [2024-10-07 14:51:47.483124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.998 qpair failed and we were unable to recover it. 00:41:23.999 [2024-10-07 14:51:47.483459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.999 [2024-10-07 14:51:47.483474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.999 qpair failed and we were unable to recover it. 
00:41:23.999 [2024-10-07 14:51:47.483834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.999 [2024-10-07 14:51:47.483850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.999 qpair failed and we were unable to recover it. 00:41:23.999 [2024-10-07 14:51:47.484199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.999 [2024-10-07 14:51:47.484214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.999 qpair failed and we were unable to recover it. 00:41:23.999 [2024-10-07 14:51:47.484551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.999 [2024-10-07 14:51:47.484566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.999 qpair failed and we were unable to recover it. 00:41:23.999 [2024-10-07 14:51:47.484892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.999 [2024-10-07 14:51:47.484908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.999 qpair failed and we were unable to recover it. 00:41:23.999 [2024-10-07 14:51:47.485229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.999 [2024-10-07 14:51:47.485244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.999 qpair failed and we were unable to recover it. 
00:41:23.999 [2024-10-07 14:51:47.485583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.999 [2024-10-07 14:51:47.485598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.999 qpair failed and we were unable to recover it. 00:41:23.999 [2024-10-07 14:51:47.485911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.999 [2024-10-07 14:51:47.485925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.999 qpair failed and we were unable to recover it. 00:41:23.999 [2024-10-07 14:51:47.486278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.999 [2024-10-07 14:51:47.486294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.999 qpair failed and we were unable to recover it. 00:41:23.999 [2024-10-07 14:51:47.486615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.999 [2024-10-07 14:51:47.486630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.999 qpair failed and we were unable to recover it. 00:41:23.999 [2024-10-07 14:51:47.486958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:23.999 [2024-10-07 14:51:47.486973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:23.999 qpair failed and we were unable to recover it. 
[... connect() failed (errno = 111) / qpair failed sequence repeats ...]
00:41:23.999 [2024-10-07 14:51:47.488384] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:41:23.999 [2024-10-07 14:51:47.488482] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:41:23.999 [2024-10-07 14:51:47.489852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:23.999 [2024-10-07 14:51:47.489868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:23.999 qpair failed and we were unable to recover it.
[... the same failure sequence for tqpair=0x61500039f100 (addr=10.0.0.2, port=4420) continues through 14:51:47.502 ...]
00:41:24.000 [2024-10-07 14:51:47.502684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.000 [2024-10-07 14:51:47.502700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.000 qpair failed and we were unable to recover it. 00:41:24.000 [2024-10-07 14:51:47.502892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.000 [2024-10-07 14:51:47.502913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.000 qpair failed and we were unable to recover it. 00:41:24.000 [2024-10-07 14:51:47.503205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.000 [2024-10-07 14:51:47.503221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.000 qpair failed and we were unable to recover it. 00:41:24.000 [2024-10-07 14:51:47.503564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.000 [2024-10-07 14:51:47.503579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.000 qpair failed and we were unable to recover it. 00:41:24.000 [2024-10-07 14:51:47.503911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.000 [2024-10-07 14:51:47.503926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.000 qpair failed and we were unable to recover it. 
00:41:24.000 [2024-10-07 14:51:47.504304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.000 [2024-10-07 14:51:47.504320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.000 qpair failed and we were unable to recover it. 00:41:24.000 [2024-10-07 14:51:47.504659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.000 [2024-10-07 14:51:47.504675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.000 qpair failed and we were unable to recover it. 00:41:24.000 [2024-10-07 14:51:47.505017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.000 [2024-10-07 14:51:47.505034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.000 qpair failed and we were unable to recover it. 00:41:24.000 [2024-10-07 14:51:47.505255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.000 [2024-10-07 14:51:47.505270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.000 qpair failed and we were unable to recover it. 00:41:24.000 [2024-10-07 14:51:47.505468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.000 [2024-10-07 14:51:47.505484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.000 qpair failed and we were unable to recover it. 
00:41:24.000 [2024-10-07 14:51:47.505813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.000 [2024-10-07 14:51:47.505827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.000 qpair failed and we were unable to recover it. 00:41:24.000 [2024-10-07 14:51:47.506133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.000 [2024-10-07 14:51:47.506148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.000 qpair failed and we were unable to recover it. 00:41:24.000 [2024-10-07 14:51:47.506485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.000 [2024-10-07 14:51:47.506500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.000 qpair failed and we were unable to recover it. 00:41:24.000 [2024-10-07 14:51:47.506827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.000 [2024-10-07 14:51:47.506842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.000 qpair failed and we were unable to recover it. 00:41:24.000 [2024-10-07 14:51:47.507071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.000 [2024-10-07 14:51:47.507087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.000 qpair failed and we were unable to recover it. 
00:41:24.000 [2024-10-07 14:51:47.507421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.000 [2024-10-07 14:51:47.507438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.000 qpair failed and we were unable to recover it. 00:41:24.000 [2024-10-07 14:51:47.507765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.000 [2024-10-07 14:51:47.507780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.000 qpair failed and we were unable to recover it. 00:41:24.000 [2024-10-07 14:51:47.508109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.000 [2024-10-07 14:51:47.508125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.508459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.508474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.508814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.508829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 
00:41:24.001 [2024-10-07 14:51:47.509036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.509052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.509379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.509395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.509707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.509722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.510051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.510067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.510303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.510317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 
00:41:24.001 [2024-10-07 14:51:47.510634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.510648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.510975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.510990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.511199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.511214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.511394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.511408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.511708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.511723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 
00:41:24.001 [2024-10-07 14:51:47.512036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.512053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.512244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.512259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.512545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.512561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.512861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.512875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.513199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.513215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 
00:41:24.001 [2024-10-07 14:51:47.513401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.513416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.513664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.513679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.513973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.513987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.514326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.514341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.514655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.514671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 
00:41:24.001 [2024-10-07 14:51:47.514965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.514979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.515344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.515363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.515716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.515731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.516055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.516070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.516379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.516394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 
00:41:24.001 [2024-10-07 14:51:47.516604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.516620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.516921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.516936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.517255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.517271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.517456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.517470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.517818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.517833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 
00:41:24.001 [2024-10-07 14:51:47.518046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.518061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.518391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.518405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.518750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.518764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.519077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.519094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.519191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.519205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 
00:41:24.001 [2024-10-07 14:51:47.519540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.519554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.519879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.519894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.001 [2024-10-07 14:51:47.520118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.001 [2024-10-07 14:51:47.520134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.001 qpair failed and we were unable to recover it. 00:41:24.002 [2024-10-07 14:51:47.520351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.002 [2024-10-07 14:51:47.520365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.002 qpair failed and we were unable to recover it. 00:41:24.002 [2024-10-07 14:51:47.520693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.002 [2024-10-07 14:51:47.520708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.002 qpair failed and we were unable to recover it. 
00:41:24.002 [2024-10-07 14:51:47.521113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.002 [2024-10-07 14:51:47.521129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.002 qpair failed and we were unable to recover it. 00:41:24.002 [2024-10-07 14:51:47.521467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.002 [2024-10-07 14:51:47.521482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.002 qpair failed and we were unable to recover it. 00:41:24.002 [2024-10-07 14:51:47.521714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.002 [2024-10-07 14:51:47.521728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.002 qpair failed and we were unable to recover it. 00:41:24.002 [2024-10-07 14:51:47.522066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.002 [2024-10-07 14:51:47.522080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.002 qpair failed and we were unable to recover it. 00:41:24.002 [2024-10-07 14:51:47.522398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.002 [2024-10-07 14:51:47.522414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.002 qpair failed and we were unable to recover it. 
00:41:24.002 [2024-10-07 14:51:47.522737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.002 [2024-10-07 14:51:47.522752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.002 qpair failed and we were unable to recover it. 00:41:24.002 [2024-10-07 14:51:47.523100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.002 [2024-10-07 14:51:47.523116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.002 qpair failed and we were unable to recover it. 00:41:24.002 [2024-10-07 14:51:47.523318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.002 [2024-10-07 14:51:47.523333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.002 qpair failed and we were unable to recover it. 00:41:24.002 [2024-10-07 14:51:47.523631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.002 [2024-10-07 14:51:47.523645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.002 qpair failed and we were unable to recover it. 00:41:24.002 [2024-10-07 14:51:47.523968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.002 [2024-10-07 14:51:47.523982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.002 qpair failed and we were unable to recover it. 
00:41:24.002 [2024-10-07 14:51:47.524201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.002 [2024-10-07 14:51:47.524218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.002 qpair failed and we were unable to recover it. 00:41:24.002 [2024-10-07 14:51:47.524546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.002 [2024-10-07 14:51:47.524561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.002 qpair failed and we were unable to recover it. 00:41:24.002 [2024-10-07 14:51:47.524898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.002 [2024-10-07 14:51:47.524914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.002 qpair failed and we were unable to recover it. 00:41:24.002 [2024-10-07 14:51:47.525248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.002 [2024-10-07 14:51:47.525263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.002 qpair failed and we were unable to recover it. 00:41:24.002 [2024-10-07 14:51:47.525608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.002 [2024-10-07 14:51:47.525623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.002 qpair failed and we were unable to recover it. 
00:41:24.002 [2024-10-07 14:51:47.525918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.002 [2024-10-07 14:51:47.525933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.002 qpair failed and we were unable to recover it. 00:41:24.002 [2024-10-07 14:51:47.526290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.002 [2024-10-07 14:51:47.526306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.002 qpair failed and we were unable to recover it. 00:41:24.002 [2024-10-07 14:51:47.526614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.002 [2024-10-07 14:51:47.526629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.002 qpair failed and we were unable to recover it. 00:41:24.002 [2024-10-07 14:51:47.526932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.002 [2024-10-07 14:51:47.526947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.002 qpair failed and we were unable to recover it. 00:41:24.002 [2024-10-07 14:51:47.527146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.002 [2024-10-07 14:51:47.527162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.002 qpair failed and we were unable to recover it. 
00:41:24.005 [2024-10-07 14:51:47.562880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.005 [2024-10-07 14:51:47.562895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.005 qpair failed and we were unable to recover it. 00:41:24.005 [2024-10-07 14:51:47.563129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.005 [2024-10-07 14:51:47.563145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.005 qpair failed and we were unable to recover it. 00:41:24.005 [2024-10-07 14:51:47.563485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.005 [2024-10-07 14:51:47.563500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.005 qpair failed and we were unable to recover it. 00:41:24.005 [2024-10-07 14:51:47.563835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.005 [2024-10-07 14:51:47.563850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.005 qpair failed and we were unable to recover it. 00:41:24.005 [2024-10-07 14:51:47.564176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.005 [2024-10-07 14:51:47.564191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.005 qpair failed and we were unable to recover it. 
00:41:24.005 [2024-10-07 14:51:47.564528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.005 [2024-10-07 14:51:47.564544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.005 qpair failed and we were unable to recover it. 00:41:24.005 [2024-10-07 14:51:47.564880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.005 [2024-10-07 14:51:47.564894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.005 qpair failed and we were unable to recover it. 00:41:24.005 [2024-10-07 14:51:47.565214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.005 [2024-10-07 14:51:47.565229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.005 qpair failed and we were unable to recover it. 00:41:24.005 [2024-10-07 14:51:47.565555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.005 [2024-10-07 14:51:47.565570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.005 qpair failed and we were unable to recover it. 00:41:24.005 [2024-10-07 14:51:47.565884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.005 [2024-10-07 14:51:47.565899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.005 qpair failed and we were unable to recover it. 
00:41:24.005 [2024-10-07 14:51:47.566235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.005 [2024-10-07 14:51:47.566250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.005 qpair failed and we were unable to recover it. 00:41:24.005 [2024-10-07 14:51:47.566621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.005 [2024-10-07 14:51:47.566639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.005 qpair failed and we were unable to recover it. 00:41:24.005 [2024-10-07 14:51:47.566964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.005 [2024-10-07 14:51:47.566980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.005 qpair failed and we were unable to recover it. 00:41:24.005 [2024-10-07 14:51:47.567312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.005 [2024-10-07 14:51:47.567329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.005 qpair failed and we were unable to recover it. 00:41:24.005 [2024-10-07 14:51:47.567663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.005 [2024-10-07 14:51:47.567679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.005 qpair failed and we were unable to recover it. 
00:41:24.005 [2024-10-07 14:51:47.568013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.005 [2024-10-07 14:51:47.568028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.005 qpair failed and we were unable to recover it. 00:41:24.005 [2024-10-07 14:51:47.568361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.005 [2024-10-07 14:51:47.568375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.005 qpair failed and we were unable to recover it. 00:41:24.005 [2024-10-07 14:51:47.568718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.005 [2024-10-07 14:51:47.568733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.005 qpair failed and we were unable to recover it. 00:41:24.005 [2024-10-07 14:51:47.569047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.005 [2024-10-07 14:51:47.569062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.005 qpair failed and we were unable to recover it. 00:41:24.005 [2024-10-07 14:51:47.569432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.005 [2024-10-07 14:51:47.569446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.005 qpair failed and we were unable to recover it. 
00:41:24.005 [2024-10-07 14:51:47.569741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.569757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.570120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.570134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.570462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.570480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.570819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.570834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.571021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.571037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 
00:41:24.006 [2024-10-07 14:51:47.571379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.571393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.571723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.571738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.572021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.572036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.572335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.572350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.572692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.572706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 
00:41:24.006 [2024-10-07 14:51:47.573010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.573025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.573377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.573392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.573716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.573731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.574076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.574091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.574423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.574437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 
00:41:24.006 [2024-10-07 14:51:47.574749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.574765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.574981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.574996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.575316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.575332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.575661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.575677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.576010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.576027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 
00:41:24.006 [2024-10-07 14:51:47.576333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.576347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.576631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.576647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.576976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.576992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.577970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.578014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.578225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.578241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 
00:41:24.006 [2024-10-07 14:51:47.578564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.578579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.578915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.578930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.579218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.579234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.579593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.579607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.579942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.579957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 
00:41:24.006 [2024-10-07 14:51:47.580087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.580102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.580453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.580471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.580775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.580789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.581148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.581163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.581323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.581339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 
00:41:24.006 [2024-10-07 14:51:47.581514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.581528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.581763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.581779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.582098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.582114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.582438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.582453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 00:41:24.006 [2024-10-07 14:51:47.582784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.006 [2024-10-07 14:51:47.582798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.006 qpair failed and we were unable to recover it. 
00:41:24.006 [2024-10-07 14:51:47.583124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.007 [2024-10-07 14:51:47.583140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.007 qpair failed and we were unable to recover it. 00:41:24.007 [2024-10-07 14:51:47.583428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.007 [2024-10-07 14:51:47.583443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.007 qpair failed and we were unable to recover it. 00:41:24.007 [2024-10-07 14:51:47.583754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.007 [2024-10-07 14:51:47.583770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.007 qpair failed and we were unable to recover it. 00:41:24.007 [2024-10-07 14:51:47.584022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.007 [2024-10-07 14:51:47.584037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.007 qpair failed and we were unable to recover it. 00:41:24.007 [2024-10-07 14:51:47.584439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.007 [2024-10-07 14:51:47.584454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.007 qpair failed and we were unable to recover it. 
00:41:24.007 [2024-10-07 14:51:47.584763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.007 [2024-10-07 14:51:47.584778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.007 qpair failed and we were unable to recover it. 00:41:24.007 [2024-10-07 14:51:47.584989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.007 [2024-10-07 14:51:47.585013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.007 qpair failed and we were unable to recover it. 00:41:24.007 [2024-10-07 14:51:47.585282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.007 [2024-10-07 14:51:47.585296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.007 qpair failed and we were unable to recover it. 00:41:24.007 [2024-10-07 14:51:47.585608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.007 [2024-10-07 14:51:47.585623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.007 qpair failed and we were unable to recover it. 00:41:24.007 [2024-10-07 14:51:47.585912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.007 [2024-10-07 14:51:47.585927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.007 qpair failed and we were unable to recover it. 
00:41:24.007 [2024-10-07 14:51:47.586252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.007 [2024-10-07 14:51:47.586267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.007 qpair failed and we were unable to recover it. 00:41:24.007 [2024-10-07 14:51:47.586588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.007 [2024-10-07 14:51:47.586603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.007 qpair failed and we were unable to recover it. 00:41:24.007 [2024-10-07 14:51:47.587666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.007 [2024-10-07 14:51:47.587704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.007 qpair failed and we were unable to recover it. 00:41:24.007 [2024-10-07 14:51:47.588047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.007 [2024-10-07 14:51:47.588067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.007 qpair failed and we were unable to recover it. 00:41:24.007 [2024-10-07 14:51:47.588387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.007 [2024-10-07 14:51:47.588402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.007 qpair failed and we were unable to recover it. 
00:41:24.007 [2024-10-07 14:51:47.588728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.007 [2024-10-07 14:51:47.588743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.007 qpair failed and we were unable to recover it.
00:41:24.007 [2024-10-07 14:51:47.589072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.007 [2024-10-07 14:51:47.589088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.007 qpair failed and we were unable to recover it.
00:41:24.007 [2024-10-07 14:51:47.589405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.007 [2024-10-07 14:51:47.589419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.007 qpair failed and we were unable to recover it.
00:41:24.007 [2024-10-07 14:51:47.589746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.007 [2024-10-07 14:51:47.589762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.007 qpair failed and we were unable to recover it.
00:41:24.007 [2024-10-07 14:51:47.589964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.007 [2024-10-07 14:51:47.589978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.007 qpair failed and we were unable to recover it.
00:41:24.007 [2024-10-07 14:51:47.590302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.007 [2024-10-07 14:51:47.590319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.007 qpair failed and we were unable to recover it.
00:41:24.007 [2024-10-07 14:51:47.590512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.007 [2024-10-07 14:51:47.590529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.007 qpair failed and we were unable to recover it.
00:41:24.007 [2024-10-07 14:51:47.590807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.007 [2024-10-07 14:51:47.590822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.007 qpair failed and we were unable to recover it.
00:41:24.007 [2024-10-07 14:51:47.591148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.007 [2024-10-07 14:51:47.591164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.007 qpair failed and we were unable to recover it.
00:41:24.007 [2024-10-07 14:51:47.591352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.007 [2024-10-07 14:51:47.591367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.007 qpair failed and we were unable to recover it.
00:41:24.007 [2024-10-07 14:51:47.591672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.007 [2024-10-07 14:51:47.591686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.007 qpair failed and we were unable to recover it.
00:41:24.007 [2024-10-07 14:51:47.592012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.007 [2024-10-07 14:51:47.592028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.007 qpair failed and we were unable to recover it.
00:41:24.007 [2024-10-07 14:51:47.592365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.007 [2024-10-07 14:51:47.592380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.007 qpair failed and we were unable to recover it.
00:41:24.007 [2024-10-07 14:51:47.592749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.007 [2024-10-07 14:51:47.592763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.007 qpair failed and we were unable to recover it.
00:41:24.007 [2024-10-07 14:51:47.593103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.007 [2024-10-07 14:51:47.593118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.007 qpair failed and we were unable to recover it.
00:41:24.007 [2024-10-07 14:51:47.593320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.007 [2024-10-07 14:51:47.593334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.007 qpair failed and we were unable to recover it.
00:41:24.007 [2024-10-07 14:51:47.593626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.007 [2024-10-07 14:51:47.593643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.007 qpair failed and we were unable to recover it.
00:41:24.007 [2024-10-07 14:51:47.593976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.007 [2024-10-07 14:51:47.593990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.007 qpair failed and we were unable to recover it.
00:41:24.007 [2024-10-07 14:51:47.594236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.007 [2024-10-07 14:51:47.594250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.007 qpair failed and we were unable to recover it.
00:41:24.007 [2024-10-07 14:51:47.594561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.007 [2024-10-07 14:51:47.594577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.007 qpair failed and we were unable to recover it.
00:41:24.007 [2024-10-07 14:51:47.594900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.594914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.595245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.595261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.595593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.595607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.595922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.595937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.596153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.596168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.596367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.596381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.596694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.596709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.597035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.597051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.597379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.597393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.597722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.597737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.598074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.598090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.598412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.598426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.598764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.598778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.599113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.599128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.599447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.599463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.599771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.599786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.600115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.600131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.600398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.600411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.600686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.600700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.600861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.600875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.601175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.601190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.601511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.601527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.601822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.601836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.602065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.602080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.602381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.602395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.602722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.602737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.603045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.603059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.603378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.603394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.603574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.603589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.604032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.604136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003c0080 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.604433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.604480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003c0080 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.604819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.604859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003c0080 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.605194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.605211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.605526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.605540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.605841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.605857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.606155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.606171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.606360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.606376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.606725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.606741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.607080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.607095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.607395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.008 [2024-10-07 14:51:47.607409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.008 qpair failed and we were unable to recover it.
00:41:24.008 [2024-10-07 14:51:47.607718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.607734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.608056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.608071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.608388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.608404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.608717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.608731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.609071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.609086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.609417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.609431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.609727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.609741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.610052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.610067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.610237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.610251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.610611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.610626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.610933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.610949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.611162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.611177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.611358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.611372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.611740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.611755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.612086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.612101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.612424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.612439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.612778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.612794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.613114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.613129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.613473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.613491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.613811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.613826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.614046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.614062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.614396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.614410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.614749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.614764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.615019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.615034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.615320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.615335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.615505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.615519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.615845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.615859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.616199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.616214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.616414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.616430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.616626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.616640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.616964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.616979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.617312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.617328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.617589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.617604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.618009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.618024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.618349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.618363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.618695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.618709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.009 [2024-10-07 14:51:47.619037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.009 [2024-10-07 14:51:47.619055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.009 qpair failed and we were unable to recover it.
00:41:24.010 [2024-10-07 14:51:47.619367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.010 [2024-10-07 14:51:47.619381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.010 qpair failed and we were unable to recover it.
00:41:24.010 [2024-10-07 14:51:47.619714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.010 [2024-10-07 14:51:47.619729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.010 qpair failed and we were unable to recover it.
00:41:24.010 [2024-10-07 14:51:47.620032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.010 [2024-10-07 14:51:47.620047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.010 qpair failed and we were unable to recover it.
00:41:24.010 [2024-10-07 14:51:47.620361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.010 [2024-10-07 14:51:47.620375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.010 qpair failed and we were unable to recover it.
00:41:24.010 [2024-10-07 14:51:47.620701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.010 [2024-10-07 14:51:47.620715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.010 qpair failed and we were unable to recover it.
00:41:24.010 [2024-10-07 14:51:47.621053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.010 [2024-10-07 14:51:47.621067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.010 qpair failed and we were unable to recover it.
00:41:24.010 [2024-10-07 14:51:47.621401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.010 [2024-10-07 14:51:47.621416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.010 qpair failed and we were unable to recover it.
00:41:24.010 [2024-10-07 14:51:47.621743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.010 [2024-10-07 14:51:47.621758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.010 qpair failed and we were unable to recover it.
00:41:24.010 [2024-10-07 14:51:47.622096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.010 [2024-10-07 14:51:47.622112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.010 qpair failed and we were unable to recover it.
00:41:24.010 [2024-10-07 14:51:47.622418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.010 [2024-10-07 14:51:47.622433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.010 qpair failed and we were unable to recover it.
00:41:24.010 [2024-10-07 14:51:47.622761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.010 [2024-10-07 14:51:47.622774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.010 qpair failed and we were unable to recover it.
00:41:24.010 [2024-10-07 14:51:47.623109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.010 [2024-10-07 14:51:47.623125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.010 qpair failed and we were unable to recover it.
00:41:24.010 [2024-10-07 14:51:47.623435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.010 [2024-10-07 14:51:47.623450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.010 qpair failed and we were unable to recover it.
00:41:24.010 [2024-10-07 14:51:47.623754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.010 [2024-10-07 14:51:47.623770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.010 qpair failed and we were unable to recover it.
00:41:24.010 [2024-10-07 14:51:47.624073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.010 [2024-10-07 14:51:47.624088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.010 qpair failed and we were unable to recover it.
00:41:24.010 [2024-10-07 14:51:47.624416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.010 [2024-10-07 14:51:47.624431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.010 qpair failed and we were unable to recover it.
00:41:24.010 [2024-10-07 14:51:47.624620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.010 [2024-10-07 14:51:47.624635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.010 qpair failed and we were unable to recover it.
00:41:24.010 [2024-10-07 14:51:47.624950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.010 [2024-10-07 14:51:47.624964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.010 qpair failed and we were unable to recover it.
00:41:24.010 [2024-10-07 14:51:47.625265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.010 [2024-10-07 14:51:47.625280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.010 qpair failed and we were unable to recover it. 00:41:24.010 [2024-10-07 14:51:47.625600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.010 [2024-10-07 14:51:47.625615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.010 qpair failed and we were unable to recover it. 00:41:24.010 [2024-10-07 14:51:47.625945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.010 [2024-10-07 14:51:47.625960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.010 qpair failed and we were unable to recover it. 00:41:24.010 [2024-10-07 14:51:47.626288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.010 [2024-10-07 14:51:47.626304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.010 qpair failed and we were unable to recover it. 00:41:24.010 [2024-10-07 14:51:47.626493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.010 [2024-10-07 14:51:47.626510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.010 qpair failed and we were unable to recover it. 
00:41:24.010 [2024-10-07 14:51:47.626790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.010 [2024-10-07 14:51:47.626806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.010 qpair failed and we were unable to recover it. 00:41:24.010 [2024-10-07 14:51:47.627129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.010 [2024-10-07 14:51:47.627145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.010 qpair failed and we were unable to recover it. 00:41:24.010 [2024-10-07 14:51:47.627474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.010 [2024-10-07 14:51:47.627489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.010 qpair failed and we were unable to recover it. 00:41:24.010 [2024-10-07 14:51:47.627818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.010 [2024-10-07 14:51:47.627836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.010 qpair failed and we were unable to recover it. 00:41:24.010 [2024-10-07 14:51:47.628151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.010 [2024-10-07 14:51:47.628167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.010 qpair failed and we were unable to recover it. 
00:41:24.010 [2024-10-07 14:51:47.628491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.010 [2024-10-07 14:51:47.628507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.010 qpair failed and we were unable to recover it. 00:41:24.010 [2024-10-07 14:51:47.628843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.010 [2024-10-07 14:51:47.628858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.010 qpair failed and we were unable to recover it. 00:41:24.010 [2024-10-07 14:51:47.629169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.010 [2024-10-07 14:51:47.629184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.010 qpair failed and we were unable to recover it. 00:41:24.010 [2024-10-07 14:51:47.629520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.010 [2024-10-07 14:51:47.629535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.010 qpair failed and we were unable to recover it. 00:41:24.010 [2024-10-07 14:51:47.629907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.010 [2024-10-07 14:51:47.629923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.010 qpair failed and we were unable to recover it. 
00:41:24.010 [2024-10-07 14:51:47.630121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.010 [2024-10-07 14:51:47.630136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.010 qpair failed and we were unable to recover it. 00:41:24.010 [2024-10-07 14:51:47.630478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.010 [2024-10-07 14:51:47.630494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.010 qpair failed and we were unable to recover it. 00:41:24.010 [2024-10-07 14:51:47.630829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.010 [2024-10-07 14:51:47.630843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.010 qpair failed and we were unable to recover it. 00:41:24.010 [2024-10-07 14:51:47.631762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.010 [2024-10-07 14:51:47.631792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.010 qpair failed and we were unable to recover it. 00:41:24.010 [2024-10-07 14:51:47.632105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.010 [2024-10-07 14:51:47.632123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.010 qpair failed and we were unable to recover it. 
00:41:24.010 [2024-10-07 14:51:47.632474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.010 [2024-10-07 14:51:47.632489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.010 qpair failed and we were unable to recover it. 00:41:24.010 [2024-10-07 14:51:47.632813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.010 [2024-10-07 14:51:47.632827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.010 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.633159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.633174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.633507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.633522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.633826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.633842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 
00:41:24.011 [2024-10-07 14:51:47.634009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.634024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.634353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.634369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.634701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.634717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.635037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.635052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.635387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.635401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 
00:41:24.011 [2024-10-07 14:51:47.635726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.635742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.635975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.635989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.636326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.636342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.636665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.636681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.637018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.637033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 
00:41:24.011 [2024-10-07 14:51:47.637836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.637864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.638173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.638199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.638523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.638537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.638861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.638877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.639206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.639222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 
00:41:24.011 [2024-10-07 14:51:47.639308] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:24.011 [2024-10-07 14:51:47.639591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.639607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.639903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.639918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.640255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.640271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.640488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.640502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.640822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.640838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 
00:41:24.011 [2024-10-07 14:51:47.641175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.641189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.641526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.641540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.641876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.641900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.642208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.642226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.642534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.642550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 
00:41:24.011 [2024-10-07 14:51:47.642876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.642890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.643076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.643094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.643208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.643231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.643564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.643592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.643935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.643954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 
00:41:24.011 [2024-10-07 14:51:47.644282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.644298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.011 [2024-10-07 14:51:47.644532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.011 [2024-10-07 14:51:47.644546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.011 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.644861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.644876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.645176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.645192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.645401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.645416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 
00:41:24.012 [2024-10-07 14:51:47.645747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.645762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.646097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.646116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.646461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.646476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.646796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.646810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.647145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.647161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 
00:41:24.012 [2024-10-07 14:51:47.647571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.647585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.647896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.647912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.648234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.648250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.648438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.648452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.648710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.648725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 
00:41:24.012 [2024-10-07 14:51:47.648939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.648954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.649270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.649285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.649613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.649627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.649959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.649974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.650202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.650217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 
00:41:24.012 [2024-10-07 14:51:47.650558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.650573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.650947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.650962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.651324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.651339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.651547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.651561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.651781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.651797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 
00:41:24.012 [2024-10-07 14:51:47.652136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.652151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.652394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.652409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.652734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.652748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.653055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.653073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.653307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.653322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 
00:41:24.012 [2024-10-07 14:51:47.653620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.653635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.653959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.653973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.654284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.654300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.654640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.654655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.654715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.654729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 
00:41:24.012 [2024-10-07 14:51:47.655023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.655041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.655382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.655396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.655592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.655606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.655778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.655794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.656172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.656189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 
00:41:24.012 [2024-10-07 14:51:47.656322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.656337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.012 qpair failed and we were unable to recover it. 00:41:24.012 [2024-10-07 14:51:47.656543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.012 [2024-10-07 14:51:47.656558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.656835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.656850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.657170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.657186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.657419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.657433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 
00:41:24.013 [2024-10-07 14:51:47.657671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.657686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.658009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.658026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.658327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.658342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.658614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.658628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.658945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.658961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 
00:41:24.013 [2024-10-07 14:51:47.659286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.659301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.659629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.659644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.659970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.659986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.660310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.660326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.660518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.660532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 
00:41:24.013 [2024-10-07 14:51:47.660819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.660834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.661153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.661168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.661500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.661515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.661881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.661897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.662236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.662252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 
00:41:24.013 [2024-10-07 14:51:47.662570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.662586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.662883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.662898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.663219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.663235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.663569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.663585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.663912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.663927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 
00:41:24.013 [2024-10-07 14:51:47.664255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.664273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.664600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.664616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.664943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.664958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.665897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.665928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.666242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.666259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 
00:41:24.013 [2024-10-07 14:51:47.667321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.667353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.667533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.667550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.667835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.667850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.668134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.668149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.013 [2024-10-07 14:51:47.668488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.668503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 
00:41:24.013 [2024-10-07 14:51:47.668831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.013 [2024-10-07 14:51:47.668847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.013 qpair failed and we were unable to recover it. 00:41:24.313 [2024-10-07 14:51:47.669132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.313 [2024-10-07 14:51:47.669149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.313 qpair failed and we were unable to recover it. 00:41:24.313 [2024-10-07 14:51:47.669465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.313 [2024-10-07 14:51:47.669481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.313 qpair failed and we were unable to recover it. 00:41:24.313 [2024-10-07 14:51:47.669802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.313 [2024-10-07 14:51:47.669819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.313 qpair failed and we were unable to recover it. 00:41:24.313 [2024-10-07 14:51:47.670140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.313 [2024-10-07 14:51:47.670157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.313 qpair failed and we were unable to recover it. 
00:41:24.313 [2024-10-07 14:51:47.670473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.313 [2024-10-07 14:51:47.670488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.313 qpair failed and we were unable to recover it. 00:41:24.313 [2024-10-07 14:51:47.670826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.313 [2024-10-07 14:51:47.670841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.313 qpair failed and we were unable to recover it. 00:41:24.313 [2024-10-07 14:51:47.671022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.313 [2024-10-07 14:51:47.671038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.313 qpair failed and we were unable to recover it. 00:41:24.313 [2024-10-07 14:51:47.671329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.313 [2024-10-07 14:51:47.671344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.313 qpair failed and we were unable to recover it. 00:41:24.313 [2024-10-07 14:51:47.671665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.313 [2024-10-07 14:51:47.671681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.313 qpair failed and we were unable to recover it. 
00:41:24.313 [2024-10-07 14:51:47.672016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.313 [2024-10-07 14:51:47.672032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.313 qpair failed and we were unable to recover it. 00:41:24.313 [2024-10-07 14:51:47.672782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.313 [2024-10-07 14:51:47.672812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.313 qpair failed and we were unable to recover it. 00:41:24.313 [2024-10-07 14:51:47.672996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.313 [2024-10-07 14:51:47.673018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.313 qpair failed and we were unable to recover it. 00:41:24.313 [2024-10-07 14:51:47.673360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.313 [2024-10-07 14:51:47.673376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.313 qpair failed and we were unable to recover it. 00:41:24.313 [2024-10-07 14:51:47.673706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.313 [2024-10-07 14:51:47.673721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.313 qpair failed and we were unable to recover it. 
00:41:24.313 [2024-10-07 14:51:47.674056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.674071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 00:41:24.314 [2024-10-07 14:51:47.674425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.674440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 00:41:24.314 [2024-10-07 14:51:47.674616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.674631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 00:41:24.314 [2024-10-07 14:51:47.674943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.674959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 00:41:24.314 [2024-10-07 14:51:47.675266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.675281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 
00:41:24.314 [2024-10-07 14:51:47.675478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.675494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 00:41:24.314 [2024-10-07 14:51:47.675771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.675785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 00:41:24.314 [2024-10-07 14:51:47.676034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.676049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 00:41:24.314 [2024-10-07 14:51:47.676842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.676871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 00:41:24.314 [2024-10-07 14:51:47.677188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.677204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 
00:41:24.314 [2024-10-07 14:51:47.678014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.678041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 00:41:24.314 [2024-10-07 14:51:47.678362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.678378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 00:41:24.314 [2024-10-07 14:51:47.679130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.679158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 00:41:24.314 [2024-10-07 14:51:47.679498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.679514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 00:41:24.314 [2024-10-07 14:51:47.679843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.679859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 
00:41:24.314 [2024-10-07 14:51:47.680215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.680231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 00:41:24.314 [2024-10-07 14:51:47.680423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.680437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 00:41:24.314 [2024-10-07 14:51:47.680752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.680767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 00:41:24.314 [2024-10-07 14:51:47.681067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.681082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 00:41:24.314 [2024-10-07 14:51:47.681398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.681413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 
00:41:24.314 [2024-10-07 14:51:47.681602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.681617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 00:41:24.314 [2024-10-07 14:51:47.681949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.681964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 00:41:24.314 [2024-10-07 14:51:47.682304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.682318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 00:41:24.314 [2024-10-07 14:51:47.682624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.682640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 00:41:24.314 [2024-10-07 14:51:47.683010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.314 [2024-10-07 14:51:47.683026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.314 qpair failed and we were unable to recover it. 
00:41:24.314 [2024-10-07 14:51:47.683339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.314 [2024-10-07 14:51:47.683363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.314 qpair failed and we were unable to recover it.
00:41:24.317 [2024-10-07 14:51:47.723887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.317 [2024-10-07 14:51:47.723901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.317 qpair failed and we were unable to recover it. 00:41:24.317 [2024-10-07 14:51:47.724248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.317 [2024-10-07 14:51:47.724265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.317 qpair failed and we were unable to recover it. 00:41:24.317 [2024-10-07 14:51:47.724584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.317 [2024-10-07 14:51:47.724599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.317 qpair failed and we were unable to recover it. 00:41:24.317 [2024-10-07 14:51:47.724903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.317 [2024-10-07 14:51:47.724919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.317 qpair failed and we were unable to recover it. 00:41:24.317 [2024-10-07 14:51:47.725245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.317 [2024-10-07 14:51:47.725260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.317 qpair failed and we were unable to recover it. 
00:41:24.317 [2024-10-07 14:51:47.725593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.317 [2024-10-07 14:51:47.725608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.317 qpair failed and we were unable to recover it. 00:41:24.317 [2024-10-07 14:51:47.725927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.317 [2024-10-07 14:51:47.725941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.317 qpair failed and we were unable to recover it. 00:41:24.317 [2024-10-07 14:51:47.726257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.317 [2024-10-07 14:51:47.726272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.317 qpair failed and we were unable to recover it. 00:41:24.317 [2024-10-07 14:51:47.726498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.317 [2024-10-07 14:51:47.726512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.317 qpair failed and we were unable to recover it. 00:41:24.317 [2024-10-07 14:51:47.726821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.317 [2024-10-07 14:51:47.726836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.317 qpair failed and we were unable to recover it. 
00:41:24.317 [2024-10-07 14:51:47.727244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.317 [2024-10-07 14:51:47.727259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.317 qpair failed and we were unable to recover it. 00:41:24.317 [2024-10-07 14:51:47.727543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.317 [2024-10-07 14:51:47.727557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.317 qpair failed and we were unable to recover it. 00:41:24.317 [2024-10-07 14:51:47.727904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.317 [2024-10-07 14:51:47.727918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.317 qpair failed and we were unable to recover it. 00:41:24.317 [2024-10-07 14:51:47.728635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.317 [2024-10-07 14:51:47.728663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.317 qpair failed and we were unable to recover it. 00:41:24.317 [2024-10-07 14:51:47.728984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.317 [2024-10-07 14:51:47.729017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.317 qpair failed and we were unable to recover it. 
00:41:24.318 [2024-10-07 14:51:47.729352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.729368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.729757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.729775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.729971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.729986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.730297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.730312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.730641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.730656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 
00:41:24.318 [2024-10-07 14:51:47.730968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.730984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.731298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.731312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.731494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.731510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.731841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.731856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.732166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.732181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 
00:41:24.318 [2024-10-07 14:51:47.732502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.732517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.732854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.732869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.733173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.733189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.733384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.733398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.733684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.733698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 
00:41:24.318 [2024-10-07 14:51:47.734041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.734056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.734427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.734443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.734755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.734769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.735058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.735072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.735411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.735426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 
00:41:24.318 [2024-10-07 14:51:47.735750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.735764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.736106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.736121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.736426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.736440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.736773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.736789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.737112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.737127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 
00:41:24.318 [2024-10-07 14:51:47.737483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.737499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.737680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.737694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.737876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.737891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.738237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.738253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.738549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.738564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 
00:41:24.318 [2024-10-07 14:51:47.738880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.738894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.739226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.739241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.739572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.739587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.739931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.739948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.740289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.740304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 
00:41:24.318 [2024-10-07 14:51:47.740523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.740538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.740869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.740884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.741220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.318 [2024-10-07 14:51:47.741234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.318 qpair failed and we were unable to recover it. 00:41:24.318 [2024-10-07 14:51:47.741443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.319 [2024-10-07 14:51:47.741458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.319 qpair failed and we were unable to recover it. 00:41:24.319 [2024-10-07 14:51:47.741650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.319 [2024-10-07 14:51:47.741666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.319 qpair failed and we were unable to recover it. 
00:41:24.319 [2024-10-07 14:51:47.741853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.319 [2024-10-07 14:51:47.741867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.319 qpair failed and we were unable to recover it. 00:41:24.319 [2024-10-07 14:51:47.742037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.319 [2024-10-07 14:51:47.742054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.319 qpair failed and we were unable to recover it. 00:41:24.319 [2024-10-07 14:51:47.742394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.319 [2024-10-07 14:51:47.742408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.319 qpair failed and we were unable to recover it. 00:41:24.319 [2024-10-07 14:51:47.742623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.319 [2024-10-07 14:51:47.742637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.319 qpair failed and we were unable to recover it. 00:41:24.319 [2024-10-07 14:51:47.742957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.319 [2024-10-07 14:51:47.742971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.319 qpair failed and we were unable to recover it. 
00:41:24.319 [2024-10-07 14:51:47.743315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.319 [2024-10-07 14:51:47.743332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.319 qpair failed and we were unable to recover it. 00:41:24.319 [2024-10-07 14:51:47.743674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.319 [2024-10-07 14:51:47.743688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.319 qpair failed and we were unable to recover it. 00:41:24.319 [2024-10-07 14:51:47.744020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.319 [2024-10-07 14:51:47.744036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.319 qpair failed and we were unable to recover it. 00:41:24.319 [2024-10-07 14:51:47.744247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.319 [2024-10-07 14:51:47.744262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.319 qpair failed and we were unable to recover it. 00:41:24.319 [2024-10-07 14:51:47.744568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.319 [2024-10-07 14:51:47.744583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.319 qpair failed and we were unable to recover it. 
00:41:24.319 [2024-10-07 14:51:47.744779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.319 [2024-10-07 14:51:47.744793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.319 qpair failed and we were unable to recover it. 00:41:24.319 [2024-10-07 14:51:47.744991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.319 [2024-10-07 14:51:47.745012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.319 qpair failed and we were unable to recover it. 00:41:24.319 [2024-10-07 14:51:47.745201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.319 [2024-10-07 14:51:47.745216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.319 qpair failed and we were unable to recover it. 00:41:24.319 [2024-10-07 14:51:47.745525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.319 [2024-10-07 14:51:47.745539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.319 qpair failed and we were unable to recover it. 00:41:24.319 [2024-10-07 14:51:47.745874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.319 [2024-10-07 14:51:47.745889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.319 qpair failed and we were unable to recover it. 
00:41:24.319 [2024-10-07 14:51:47.746101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.319 [2024-10-07 14:51:47.746116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.319 qpair failed and we were unable to recover it. 00:41:24.319 [2024-10-07 14:51:47.746208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.319 [2024-10-07 14:51:47.746223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.319 qpair failed and we were unable to recover it. 00:41:24.319 [2024-10-07 14:51:47.746495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.319 [2024-10-07 14:51:47.746511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.319 qpair failed and we were unable to recover it. 00:41:24.319 [2024-10-07 14:51:47.746784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.319 [2024-10-07 14:51:47.746800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.319 qpair failed and we were unable to recover it. 00:41:24.319 [2024-10-07 14:51:47.747111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.319 [2024-10-07 14:51:47.747129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.319 qpair failed and we were unable to recover it. 
00:41:24.319 [2024-10-07 14:51:47.747358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.319 [2024-10-07 14:51:47.747374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.319 qpair failed and we were unable to recover it.
[... the same three-line error (connect() failed, errno = 111 → sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") repeats roughly 115 more times, timestamps 2024-10-07 14:51:47.747–14:51:47.782; every reconnect attempt to 10.0.0.2:4420 fails identically ...]
00:41:24.322 [2024-10-07 14:51:47.782414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.322 [2024-10-07 14:51:47.782430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.322 qpair failed and we were unable to recover it. 00:41:24.322 [2024-10-07 14:51:47.782663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.322 [2024-10-07 14:51:47.782678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.322 qpair failed and we were unable to recover it. 00:41:24.322 [2024-10-07 14:51:47.783028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.322 [2024-10-07 14:51:47.783044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.322 qpair failed and we were unable to recover it. 00:41:24.322 [2024-10-07 14:51:47.783371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.322 [2024-10-07 14:51:47.783386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.322 qpair failed and we were unable to recover it. 00:41:24.322 [2024-10-07 14:51:47.783653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.322 [2024-10-07 14:51:47.783667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.322 qpair failed and we were unable to recover it. 
00:41:24.322 [2024-10-07 14:51:47.783958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.322 [2024-10-07 14:51:47.783973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.322 qpair failed and we were unable to recover it. 00:41:24.322 [2024-10-07 14:51:47.784296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.322 [2024-10-07 14:51:47.784312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.322 qpair failed and we were unable to recover it. 00:41:24.322 [2024-10-07 14:51:47.784647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.322 [2024-10-07 14:51:47.784663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.322 qpair failed and we were unable to recover it. 00:41:24.322 [2024-10-07 14:51:47.784988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.322 [2024-10-07 14:51:47.785007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.322 qpair failed and we were unable to recover it. 00:41:24.322 [2024-10-07 14:51:47.785356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.322 [2024-10-07 14:51:47.785372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.322 qpair failed and we were unable to recover it. 
00:41:24.322 [2024-10-07 14:51:47.785659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.322 [2024-10-07 14:51:47.785675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.322 qpair failed and we were unable to recover it. 00:41:24.322 [2024-10-07 14:51:47.785865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.322 [2024-10-07 14:51:47.785880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.322 qpair failed and we were unable to recover it. 00:41:24.322 [2024-10-07 14:51:47.786115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.322 [2024-10-07 14:51:47.786130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.322 qpair failed and we were unable to recover it. 00:41:24.322 [2024-10-07 14:51:47.786424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.322 [2024-10-07 14:51:47.786441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.322 qpair failed and we were unable to recover it. 00:41:24.322 [2024-10-07 14:51:47.786593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.322 [2024-10-07 14:51:47.786609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.322 qpair failed and we were unable to recover it. 
00:41:24.322 [2024-10-07 14:51:47.786846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.322 [2024-10-07 14:51:47.786862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.322 qpair failed and we were unable to recover it. 00:41:24.322 [2024-10-07 14:51:47.787072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.322 [2024-10-07 14:51:47.787087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.322 qpair failed and we were unable to recover it. 00:41:24.322 [2024-10-07 14:51:47.787414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.322 [2024-10-07 14:51:47.787429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.322 qpair failed and we were unable to recover it. 00:41:24.322 [2024-10-07 14:51:47.787759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.322 [2024-10-07 14:51:47.787775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.322 qpair failed and we were unable to recover it. 00:41:24.322 [2024-10-07 14:51:47.788113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.322 [2024-10-07 14:51:47.788128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 
00:41:24.323 [2024-10-07 14:51:47.788427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.788442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.788634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.788650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.788971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.788996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.789297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.789312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.789632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.789648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 
00:41:24.323 [2024-10-07 14:51:47.789984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.789998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.790347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.790362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.790734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.790748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.791077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.791092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.791425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.791440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 
00:41:24.323 [2024-10-07 14:51:47.791638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.791653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.791974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.791988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.792206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.792220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.792511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.792526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.792859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.792874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 
00:41:24.323 [2024-10-07 14:51:47.793072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.793087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.793365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.793381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.793704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.793719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.794036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.794052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.794262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.794278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 
00:41:24.323 [2024-10-07 14:51:47.794663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.794678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.794891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.794906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.795230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.795247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.795546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.795562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.795933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.795948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 
00:41:24.323 [2024-10-07 14:51:47.796276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.796291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.796616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.796632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.796956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.796972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.797149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.797165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.797488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.797503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 
00:41:24.323 [2024-10-07 14:51:47.797870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.797886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.798226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.798241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.798530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.798545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.798883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.798900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.799195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.799210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 
00:41:24.323 [2024-10-07 14:51:47.799535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.799550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.799905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.799920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.800227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.800242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.800419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.800435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.323 qpair failed and we were unable to recover it. 00:41:24.323 [2024-10-07 14:51:47.800758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.323 [2024-10-07 14:51:47.800773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 
00:41:24.324 [2024-10-07 14:51:47.801107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.801122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.801304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.801318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.801505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.801520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.801847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.801861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.802083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.802098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 
00:41:24.324 [2024-10-07 14:51:47.802481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.802496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.802663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.802678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.803031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.803052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.803409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.803425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.803808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.803824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 
00:41:24.324 [2024-10-07 14:51:47.804163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.804179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.804512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.804527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.804851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.804867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.805197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.805213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.805547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.805563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 
00:41:24.324 [2024-10-07 14:51:47.805760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.805777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.806080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.806095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.806408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.806423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.806786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.806801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.807127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.807143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 
00:41:24.324 [2024-10-07 14:51:47.807480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.807496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.807787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.807803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.808090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.808106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.808437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.808453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.808784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.808799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 
00:41:24.324 [2024-10-07 14:51:47.809006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.809021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.809331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.809347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.809664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.809681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.810010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.810025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.810211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.810226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 
00:41:24.324 [2024-10-07 14:51:47.810599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.810614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.810794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.810808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.811096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.811112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.811450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.811468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.811774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.811789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 
00:41:24.324 [2024-10-07 14:51:47.811959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.811973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.812292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.812307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.812617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.812633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.812965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.812981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 00:41:24.324 [2024-10-07 14:51:47.813282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.324 [2024-10-07 14:51:47.813297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.324 qpair failed and we were unable to recover it. 
00:41:24.325 [2024-10-07 14:51:47.813632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.813648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.813996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.814016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.814326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.814341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.814667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.814682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.815021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.815036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 
00:41:24.325 [2024-10-07 14:51:47.815364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.815379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.815692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.815707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.816049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.816066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.816247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.816263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.816562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.816577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 
00:41:24.325 [2024-10-07 14:51:47.816920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.816935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.817102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.817119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.817444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.817459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.817787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.817803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.818129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.818144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 
00:41:24.325 [2024-10-07 14:51:47.818479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.818493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.818821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.818836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.819171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.819187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.819471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.819486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.819669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.819684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 
00:41:24.325 [2024-10-07 14:51:47.820014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.820030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.820361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.820376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.820558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.820574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.820903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.820918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.821254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.821270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 
00:41:24.325 [2024-10-07 14:51:47.821490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.821505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.821834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.821850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.822171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.822185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.822505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.822521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.822856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.822872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 
00:41:24.325 [2024-10-07 14:51:47.823051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.823066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.823392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.823407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.823726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.823742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.824070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.824088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 00:41:24.325 [2024-10-07 14:51:47.824270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.325 [2024-10-07 14:51:47.824285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.325 qpair failed and we were unable to recover it. 
00:41:24.325 [2024-10-07 14:51:47.824472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.325 [2024-10-07 14:51:47.824486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.325 qpair failed and we were unable to recover it.
00:41:24.325 [2024-10-07 14:51:47.824756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.325 [2024-10-07 14:51:47.824771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.325 qpair failed and we were unable to recover it.
00:41:24.325 [2024-10-07 14:51:47.825054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.325 [2024-10-07 14:51:47.825069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.325 qpair failed and we were unable to recover it.
00:41:24.325 [2024-10-07 14:51:47.825365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.325 [2024-10-07 14:51:47.825380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.325 qpair failed and we were unable to recover it.
00:41:24.325 [2024-10-07 14:51:47.825688] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:41:24.325 [2024-10-07 14:51:47.825729] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:41:24.325 [2024-10-07 14:51:47.825741] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:41:24.326 [2024-10-07 14:51:47.825754] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:41:24.326 [2024-10-07 14:51:47.825763] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:41:24.326 [2024-10-07 14:51:47.825748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.326 [2024-10-07 14:51:47.825762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.326 qpair failed and we were unable to recover it.
00:41:24.326 [2024-10-07 14:51:47.826091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.326 [2024-10-07 14:51:47.826106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.326 qpair failed and we were unable to recover it.
00:41:24.326 [2024-10-07 14:51:47.826307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.326 [2024-10-07 14:51:47.826322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.326 qpair failed and we were unable to recover it.
00:41:24.326 [2024-10-07 14:51:47.826609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.326 [2024-10-07 14:51:47.826624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.326 qpair failed and we were unable to recover it.
00:41:24.326 [2024-10-07 14:51:47.826946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.326 [2024-10-07 14:51:47.826961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.326 qpair failed and we were unable to recover it.
00:41:24.326 [2024-10-07 14:51:47.827296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.326 [2024-10-07 14:51:47.827311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.326 qpair failed and we were unable to recover it.
00:41:24.326 [2024-10-07 14:51:47.827654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.326 [2024-10-07 14:51:47.827673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.326 qpair failed and we were unable to recover it.
00:41:24.326 [2024-10-07 14:51:47.828006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.326 [2024-10-07 14:51:47.828023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.326 qpair failed and we were unable to recover it.
00:41:24.326 [2024-10-07 14:51:47.828055] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5
00:41:24.326 [2024-10-07 14:51:47.828194] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6
00:41:24.326 [2024-10-07 14:51:47.828437] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4
00:41:24.326 [2024-10-07 14:51:47.828453] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7
00:41:24.326 [2024-10-07 14:51:47.828340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.326 [2024-10-07 14:51:47.828354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.326 qpair failed and we were unable to recover it.
00:41:24.326 [2024-10-07 14:51:47.828684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.326 [2024-10-07 14:51:47.828698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.326 qpair failed and we were unable to recover it.
00:41:24.326 [2024-10-07 14:51:47.829068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.326 [2024-10-07 14:51:47.829083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.326 qpair failed and we were unable to recover it.
00:41:24.326 [2024-10-07 14:51:47.829303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.326 [2024-10-07 14:51:47.829319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.326 qpair failed and we were unable to recover it.
00:41:24.326 [2024-10-07 14:51:47.829642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.326 [2024-10-07 14:51:47.829658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.326 qpair failed and we were unable to recover it.
00:41:24.326 [2024-10-07 14:51:47.829976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.326 [2024-10-07 14:51:47.829992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.326 qpair failed and we were unable to recover it.
00:41:24.326 [2024-10-07 14:51:47.830331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.326 [2024-10-07 14:51:47.830347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.326 qpair failed and we were unable to recover it. 00:41:24.326 [2024-10-07 14:51:47.830681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.326 [2024-10-07 14:51:47.830697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.326 qpair failed and we were unable to recover it. 00:41:24.326 [2024-10-07 14:51:47.831024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.326 [2024-10-07 14:51:47.831039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.326 qpair failed and we were unable to recover it. 00:41:24.326 [2024-10-07 14:51:47.831372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.326 [2024-10-07 14:51:47.831387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.326 qpair failed and we were unable to recover it. 00:41:24.326 [2024-10-07 14:51:47.831600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.326 [2024-10-07 14:51:47.831615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.326 qpair failed and we were unable to recover it. 
00:41:24.326 [2024-10-07 14:51:47.831769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.326 [2024-10-07 14:51:47.831783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.326 qpair failed and we were unable to recover it. 00:41:24.326 [2024-10-07 14:51:47.832084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.326 [2024-10-07 14:51:47.832099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.326 qpair failed and we were unable to recover it. 00:41:24.326 [2024-10-07 14:51:47.832420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.326 [2024-10-07 14:51:47.832436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.326 qpair failed and we were unable to recover it. 00:41:24.326 [2024-10-07 14:51:47.832625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.326 [2024-10-07 14:51:47.832639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.326 qpair failed and we were unable to recover it. 00:41:24.326 [2024-10-07 14:51:47.832955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.326 [2024-10-07 14:51:47.832972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.326 qpair failed and we were unable to recover it. 
00:41:24.326 [2024-10-07 14:51:47.833329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.326 [2024-10-07 14:51:47.833346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.326 qpair failed and we were unable to recover it. 00:41:24.326 [2024-10-07 14:51:47.833559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.326 [2024-10-07 14:51:47.833574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.326 qpair failed and we were unable to recover it. 00:41:24.326 [2024-10-07 14:51:47.833922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.326 [2024-10-07 14:51:47.833938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.326 qpair failed and we were unable to recover it. 00:41:24.326 [2024-10-07 14:51:47.834313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.326 [2024-10-07 14:51:47.834329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.326 qpair failed and we were unable to recover it. 00:41:24.326 [2024-10-07 14:51:47.834572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.326 [2024-10-07 14:51:47.834586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.326 qpair failed and we were unable to recover it. 
00:41:24.326 [2024-10-07 14:51:47.834897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.326 [2024-10-07 14:51:47.834911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.326 qpair failed and we were unable to recover it. 
[... identical error triplet — connect() failed, errno = 111 (ECONNREFUSED) / sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it — repeated 113 more times between 14:51:47.835 and 14:51:47.867 ...]
00:41:24.329 [2024-10-07 14:51:47.867401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.329 [2024-10-07 14:51:47.867415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.329 qpair failed and we were unable to recover it. 
00:41:24.329 [2024-10-07 14:51:47.867715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.329 [2024-10-07 14:51:47.867729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.329 qpair failed and we were unable to recover it. 00:41:24.329 [2024-10-07 14:51:47.867781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.329 [2024-10-07 14:51:47.867793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.329 qpair failed and we were unable to recover it. 00:41:24.329 [2024-10-07 14:51:47.868106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.329 [2024-10-07 14:51:47.868121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.329 qpair failed and we were unable to recover it. 00:41:24.329 [2024-10-07 14:51:47.868454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.329 [2024-10-07 14:51:47.868468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.329 qpair failed and we were unable to recover it. 00:41:24.329 [2024-10-07 14:51:47.868801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.329 [2024-10-07 14:51:47.868815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.329 qpair failed and we were unable to recover it. 
00:41:24.329 [2024-10-07 14:51:47.869039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.329 [2024-10-07 14:51:47.869054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.329 qpair failed and we were unable to recover it. 00:41:24.329 [2024-10-07 14:51:47.869248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.329 [2024-10-07 14:51:47.869262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.329 qpair failed and we were unable to recover it. 00:41:24.329 [2024-10-07 14:51:47.869448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.329 [2024-10-07 14:51:47.869465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.329 qpair failed and we were unable to recover it. 00:41:24.329 [2024-10-07 14:51:47.869803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.329 [2024-10-07 14:51:47.869819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.329 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.870027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.870042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 
00:41:24.330 [2024-10-07 14:51:47.870238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.870252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.870557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.870572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.870771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.870786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.870843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.870856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.871014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.871028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 
00:41:24.330 [2024-10-07 14:51:47.871184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.871198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.871409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.871424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.871491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.871506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.871819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.871834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.872158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.872174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 
00:41:24.330 [2024-10-07 14:51:47.872375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.872390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.872676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.872692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.873009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.873025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.873216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.873231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.873543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.873557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 
00:41:24.330 [2024-10-07 14:51:47.873868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.873883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.874217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.874232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.874556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.874570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.874909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.874923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.875296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.875311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 
00:41:24.330 [2024-10-07 14:51:47.875635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.875650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.875708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.875722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.876024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.876039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.876108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.876121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.876466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.876481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 
00:41:24.330 [2024-10-07 14:51:47.876804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.876819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.877159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.877174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.877567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.877583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.877909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.877925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.878251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.878268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 
00:41:24.330 [2024-10-07 14:51:47.878338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.878352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.878636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.878650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.878831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.878846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.879174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.879190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 00:41:24.330 [2024-10-07 14:51:47.879379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.330 [2024-10-07 14:51:47.879393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.330 qpair failed and we were unable to recover it. 
00:41:24.331 [2024-10-07 14:51:47.879708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.879723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 00:41:24.331 [2024-10-07 14:51:47.880063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.880078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 00:41:24.331 [2024-10-07 14:51:47.880274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.880294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 00:41:24.331 [2024-10-07 14:51:47.880584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.880602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 00:41:24.331 [2024-10-07 14:51:47.880866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.880882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 
00:41:24.331 [2024-10-07 14:51:47.881102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.881117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 00:41:24.331 [2024-10-07 14:51:47.881452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.881469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 00:41:24.331 [2024-10-07 14:51:47.881627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.881642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 00:41:24.331 [2024-10-07 14:51:47.881832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.881849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 00:41:24.331 [2024-10-07 14:51:47.882031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.882047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 
00:41:24.331 [2024-10-07 14:51:47.882225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.882240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 00:41:24.331 [2024-10-07 14:51:47.882582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.882598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 00:41:24.331 [2024-10-07 14:51:47.882903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.882918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 00:41:24.331 [2024-10-07 14:51:47.883252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.883267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 00:41:24.331 [2024-10-07 14:51:47.883612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.883627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 
00:41:24.331 [2024-10-07 14:51:47.883814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.883829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 00:41:24.331 [2024-10-07 14:51:47.884034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.884050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 00:41:24.331 [2024-10-07 14:51:47.884419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.884434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 00:41:24.331 [2024-10-07 14:51:47.884776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.884792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 00:41:24.331 [2024-10-07 14:51:47.884974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.884989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 
00:41:24.331 [2024-10-07 14:51:47.885171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.885186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 00:41:24.331 [2024-10-07 14:51:47.885501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.885516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 00:41:24.331 [2024-10-07 14:51:47.885683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.885698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 00:41:24.331 [2024-10-07 14:51:47.886028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.886043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 00:41:24.331 [2024-10-07 14:51:47.886236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.886250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 
00:41:24.331 [2024-10-07 14:51:47.886434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.886449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 00:41:24.331 [2024-10-07 14:51:47.886636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.886652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 00:41:24.331 [2024-10-07 14:51:47.886930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.886946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 00:41:24.331 [2024-10-07 14:51:47.887272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.887289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 00:41:24.331 [2024-10-07 14:51:47.887478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.331 [2024-10-07 14:51:47.887494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.331 qpair failed and we were unable to recover it. 
00:41:24.331 [2024-10-07 14:51:47.887684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:41:24.331 [2024-10-07 14:51:47.887698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 
00:41:24.331 qpair failed and we were unable to recover it. 
[identical three-line error group repeated 114 more times for tqpair=0x61500039f100 (addr=10.0.0.2, port=4420), timestamps 2024-10-07 14:51:47.887923 through 14:51:47.919674; every connect() attempt failed with errno = 111 and the qpair was never recovered]
00:41:24.334 [2024-10-07 14:51:47.920023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.334 [2024-10-07 14:51:47.920039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.334 qpair failed and we were unable to recover it. 00:41:24.334 [2024-10-07 14:51:47.920365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.334 [2024-10-07 14:51:47.920380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.334 qpair failed and we were unable to recover it. 00:41:24.334 [2024-10-07 14:51:47.920757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.334 [2024-10-07 14:51:47.920774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.334 qpair failed and we were unable to recover it. 00:41:24.334 [2024-10-07 14:51:47.921076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.334 [2024-10-07 14:51:47.921092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.334 qpair failed and we were unable to recover it. 00:41:24.334 [2024-10-07 14:51:47.921304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.334 [2024-10-07 14:51:47.921319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.334 qpair failed and we were unable to recover it. 
00:41:24.334 [2024-10-07 14:51:47.921492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.334 [2024-10-07 14:51:47.921506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.334 qpair failed and we were unable to recover it. 00:41:24.334 [2024-10-07 14:51:47.921700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.334 [2024-10-07 14:51:47.921714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.334 qpair failed and we were unable to recover it. 00:41:24.334 [2024-10-07 14:51:47.921885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.334 [2024-10-07 14:51:47.921899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.334 qpair failed and we were unable to recover it. 00:41:24.334 [2024-10-07 14:51:47.922085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.334 [2024-10-07 14:51:47.922101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.334 qpair failed and we were unable to recover it. 00:41:24.334 [2024-10-07 14:51:47.922300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.334 [2024-10-07 14:51:47.922316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.334 qpair failed and we were unable to recover it. 
00:41:24.334 [2024-10-07 14:51:47.922532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.334 [2024-10-07 14:51:47.922547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.334 qpair failed and we were unable to recover it. 00:41:24.334 [2024-10-07 14:51:47.922872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.334 [2024-10-07 14:51:47.922888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.334 qpair failed and we were unable to recover it. 00:41:24.334 [2024-10-07 14:51:47.923286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.334 [2024-10-07 14:51:47.923302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.334 qpair failed and we were unable to recover it. 00:41:24.334 [2024-10-07 14:51:47.923634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.923648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.923942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.923958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 
00:41:24.335 [2024-10-07 14:51:47.924131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.924147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.924481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.924497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.924685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.924702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.924897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.924913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.925273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.925288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 
00:41:24.335 [2024-10-07 14:51:47.925591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.925609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.925946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.925961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.926264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.926279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.926588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.926604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.926789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.926805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 
00:41:24.335 [2024-10-07 14:51:47.927093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.927108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.927448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.927462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.927633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.927648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.927977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.927993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.928296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.928312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 
00:41:24.335 [2024-10-07 14:51:47.928514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.928529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.928708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.928724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.929052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.929069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.929404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.929418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.929745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.929761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 
00:41:24.335 [2024-10-07 14:51:47.929937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.929953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.930292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.930311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.930489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.930504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.930844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.930860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.931262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.931277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 
00:41:24.335 [2024-10-07 14:51:47.931445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.931460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.931800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.931816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.931975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.931989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.932172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.932186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.932558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.932573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 
00:41:24.335 [2024-10-07 14:51:47.932953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.932968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.933155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.933171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.933407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.933422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.933725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.335 [2024-10-07 14:51:47.933739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.335 qpair failed and we were unable to recover it. 00:41:24.335 [2024-10-07 14:51:47.934076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.934092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 
00:41:24.336 [2024-10-07 14:51:47.934423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.934438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 00:41:24.336 [2024-10-07 14:51:47.934646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.934661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 00:41:24.336 [2024-10-07 14:51:47.934962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.934976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 00:41:24.336 [2024-10-07 14:51:47.935266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.935281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 00:41:24.336 [2024-10-07 14:51:47.935543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.935559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 
00:41:24.336 [2024-10-07 14:51:47.935938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.935953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 00:41:24.336 [2024-10-07 14:51:47.936262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.936278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 00:41:24.336 [2024-10-07 14:51:47.936458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.936476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 00:41:24.336 [2024-10-07 14:51:47.936652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.936667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 00:41:24.336 [2024-10-07 14:51:47.936865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.936883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 
00:41:24.336 [2024-10-07 14:51:47.937216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.937233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 00:41:24.336 [2024-10-07 14:51:47.937569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.937585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 00:41:24.336 [2024-10-07 14:51:47.937913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.937929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 00:41:24.336 [2024-10-07 14:51:47.938110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.938126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 00:41:24.336 [2024-10-07 14:51:47.938469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.938483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 
00:41:24.336 [2024-10-07 14:51:47.938812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.938829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 00:41:24.336 [2024-10-07 14:51:47.939123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.939139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 00:41:24.336 [2024-10-07 14:51:47.939491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.939506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 00:41:24.336 [2024-10-07 14:51:47.939686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.939701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 00:41:24.336 [2024-10-07 14:51:47.939979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.939994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 
00:41:24.336 [2024-10-07 14:51:47.940185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.940199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 00:41:24.336 [2024-10-07 14:51:47.940537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.940551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 00:41:24.336 [2024-10-07 14:51:47.940869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.940885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 00:41:24.336 [2024-10-07 14:51:47.941218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.941235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 00:41:24.336 [2024-10-07 14:51:47.941571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.941586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 
00:41:24.336 [2024-10-07 14:51:47.941934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.336 [2024-10-07 14:51:47.941950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.336 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet for tqpair=0x61500039f100 (addr=10.0.0.2, port=4420) repeats through 2024-10-07 14:51:47.975427; repeats omitted ...]
00:41:24.339 [2024-10-07 14:51:47.975764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.339 [2024-10-07 14:51:47.975779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.339 qpair failed and we were unable to recover it. 00:41:24.339 [2024-10-07 14:51:47.976124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.339 [2024-10-07 14:51:47.976140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.339 qpair failed and we were unable to recover it. 00:41:24.339 [2024-10-07 14:51:47.976211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.339 [2024-10-07 14:51:47.976227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.339 qpair failed and we were unable to recover it. 00:41:24.339 [2024-10-07 14:51:47.976512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.339 [2024-10-07 14:51:47.976526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.339 qpair failed and we were unable to recover it. 00:41:24.339 [2024-10-07 14:51:47.976854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.339 [2024-10-07 14:51:47.976869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.339 qpair failed and we were unable to recover it. 
00:41:24.339 [2024-10-07 14:51:47.977052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.339 [2024-10-07 14:51:47.977068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.339 qpair failed and we were unable to recover it. 00:41:24.339 [2024-10-07 14:51:47.977404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.339 [2024-10-07 14:51:47.977419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.339 qpair failed and we were unable to recover it. 00:41:24.339 [2024-10-07 14:51:47.977602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.339 [2024-10-07 14:51:47.977616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.339 qpair failed and we were unable to recover it. 00:41:24.339 [2024-10-07 14:51:47.977940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.339 [2024-10-07 14:51:47.977954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.339 qpair failed and we were unable to recover it. 00:41:24.339 [2024-10-07 14:51:47.978300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.339 [2024-10-07 14:51:47.978315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.339 qpair failed and we were unable to recover it. 
00:41:24.339 [2024-10-07 14:51:47.978625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.339 [2024-10-07 14:51:47.978639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.339 qpair failed and we were unable to recover it. 00:41:24.339 [2024-10-07 14:51:47.978969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.339 [2024-10-07 14:51:47.978983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.339 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.979306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.979323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.979624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.979638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.980059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.980111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 
00:41:24.340 [2024-10-07 14:51:47.980347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.980366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.980656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.980672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.981012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.981028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.981369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.981385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.981718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.981734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 
00:41:24.340 [2024-10-07 14:51:47.981921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.981937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0000 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.982026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e700 (9): Bad file descriptor 00:41:24.340 [2024-10-07 14:51:47.982518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.982536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.982710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.982724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.983021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.983037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.983331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.983345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 
00:41:24.340 [2024-10-07 14:51:47.983510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.983525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.983862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.983877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.984069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.984086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.984405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.984420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.984610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.984624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 
00:41:24.340 [2024-10-07 14:51:47.984928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.984942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.985126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.985141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.985315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.985330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.985554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.985569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.985898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.985913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 
00:41:24.340 [2024-10-07 14:51:47.986217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.986234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.986539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.986553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.986612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.986625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.986828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.986843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.987173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.987190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 
00:41:24.340 [2024-10-07 14:51:47.987526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.987543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.987869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.987885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.988193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.988209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.988567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.988582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.988776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.988791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 
00:41:24.340 [2024-10-07 14:51:47.989084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.989099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.989441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.989455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.989763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.989777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.990114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.990131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 00:41:24.340 [2024-10-07 14:51:47.990456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.340 [2024-10-07 14:51:47.990470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.340 qpair failed and we were unable to recover it. 
00:41:24.340 [2024-10-07 14:51:47.990772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.341 [2024-10-07 14:51:47.990788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.341 qpair failed and we were unable to recover it. 00:41:24.341 [2024-10-07 14:51:47.991105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.341 [2024-10-07 14:51:47.991120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.341 qpair failed and we were unable to recover it. 00:41:24.341 [2024-10-07 14:51:47.991316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.341 [2024-10-07 14:51:47.991330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.341 qpair failed and we were unable to recover it. 00:41:24.341 [2024-10-07 14:51:47.991662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.341 [2024-10-07 14:51:47.991676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.341 qpair failed and we were unable to recover it. 00:41:24.341 [2024-10-07 14:51:47.991850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.341 [2024-10-07 14:51:47.991864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.341 qpair failed and we were unable to recover it. 
00:41:24.341 [2024-10-07 14:51:47.992010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.341 [2024-10-07 14:51:47.992025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.341 qpair failed and we were unable to recover it. 00:41:24.341 [2024-10-07 14:51:47.992311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.341 [2024-10-07 14:51:47.992326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.341 qpair failed and we were unable to recover it. 00:41:24.341 [2024-10-07 14:51:47.992611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.341 [2024-10-07 14:51:47.992625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.341 qpair failed and we were unable to recover it. 00:41:24.341 [2024-10-07 14:51:47.992962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.341 [2024-10-07 14:51:47.992977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.341 qpair failed and we were unable to recover it. 00:41:24.341 [2024-10-07 14:51:47.993160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.341 [2024-10-07 14:51:47.993175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.341 qpair failed and we were unable to recover it. 
00:41:24.341 [2024-10-07 14:51:47.993409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.341 [2024-10-07 14:51:47.993425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.341 qpair failed and we were unable to recover it. 00:41:24.341 [2024-10-07 14:51:47.993712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.341 [2024-10-07 14:51:47.993728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.341 qpair failed and we were unable to recover it. 00:41:24.341 [2024-10-07 14:51:47.994029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.341 [2024-10-07 14:51:47.994046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.341 qpair failed and we were unable to recover it. 00:41:24.341 [2024-10-07 14:51:47.994381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.341 [2024-10-07 14:51:47.994396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.341 qpair failed and we were unable to recover it. 00:41:24.341 [2024-10-07 14:51:47.994673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.341 [2024-10-07 14:51:47.994688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.341 qpair failed and we were unable to recover it. 
00:41:24.341 [2024-10-07 14:51:47.995009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.341 [2024-10-07 14:51:47.995024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.341 qpair failed and we were unable to recover it. 00:41:24.341 [2024-10-07 14:51:47.995337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.341 [2024-10-07 14:51:47.995352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.341 qpair failed and we were unable to recover it. 00:41:24.341 [2024-10-07 14:51:47.995686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.341 [2024-10-07 14:51:47.995700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.341 qpair failed and we were unable to recover it. 00:41:24.341 [2024-10-07 14:51:47.995888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.341 [2024-10-07 14:51:47.995901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.341 qpair failed and we were unable to recover it. 00:41:24.341 [2024-10-07 14:51:47.996223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.341 [2024-10-07 14:51:47.996238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.341 qpair failed and we were unable to recover it. 
00:41:24.341 [2024-10-07 14:51:47.996582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.341 [2024-10-07 14:51:47.996597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.341 qpair failed and we were unable to recover it. 00:41:24.341 [2024-10-07 14:51:47.996810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.341 [2024-10-07 14:51:47.996825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.341 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:47.997159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:47.997177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:47.997480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:47.997496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:47.997805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:47.997821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 
00:41:24.663 [2024-10-07 14:51:47.998078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:47.998093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:47.998418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:47.998432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:47.998760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:47.998774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:47.999110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:47.999126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:47.999505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:47.999519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 
00:41:24.663 [2024-10-07 14:51:47.999848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:47.999867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:48.000189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:48.000204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:48.000391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:48.000405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:48.000775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:48.000790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:48.001093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:48.001109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 
00:41:24.663 [2024-10-07 14:51:48.001282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:48.001296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:48.001483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:48.001497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:48.001933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:48.002081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003c0080 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:48.002501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:48.002550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003c0080 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:48.002910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:48.002927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 
00:41:24.663 [2024-10-07 14:51:48.003254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:48.003269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:48.003335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:48.003349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:48.003517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:48.003531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:48.003737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:48.003753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:48.004101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:48.004117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 
00:41:24.663 [2024-10-07 14:51:48.004299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:48.004315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:48.004499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:48.004515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:48.004839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:48.004855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:48.005248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:48.005263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:48.005495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:48.005509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 
00:41:24.663 [2024-10-07 14:51:48.005867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:48.005882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:48.006254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:48.006270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:48.006440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:48.006454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:48.006911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.663 [2024-10-07 14:51:48.007027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003c0080 with addr=10.0.0.2, port=4420 00:41:24.663 qpair failed and we were unable to recover it. 00:41:24.663 [2024-10-07 14:51:48.007434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.007481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003c0080 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 
00:41:24.664 [2024-10-07 14:51:48.007698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.007714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.008061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.008076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.008308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.008323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.008522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.008537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.008823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.008838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 
00:41:24.664 [2024-10-07 14:51:48.009194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.009209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.009551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.009566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.009881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.009899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.010073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.010088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.010366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.010381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 
00:41:24.664 [2024-10-07 14:51:48.010718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.010734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.011068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.011083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.011375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.011390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.011577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.011591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.011767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.011782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 
00:41:24.664 [2024-10-07 14:51:48.012073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.012091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.012290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.012307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.012626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.012641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.012974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.012989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.013168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.013183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 
00:41:24.664 [2024-10-07 14:51:48.013496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.013511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.013837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.013852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.014210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.014225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.014445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.014460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.014625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.014640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 
00:41:24.664 [2024-10-07 14:51:48.014979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.014993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.015368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.015383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.015713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.015728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.016062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.016077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.016463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.016479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 
00:41:24.664 [2024-10-07 14:51:48.016647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.016662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.016971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.016986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.017184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.017201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.017480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.017494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.017683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.017699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 
00:41:24.664 [2024-10-07 14:51:48.017871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.017885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.018181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.018196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.018557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.018571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.664 [2024-10-07 14:51:48.018906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.664 [2024-10-07 14:51:48.018922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.664 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.019113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.019128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 
00:41:24.665 [2024-10-07 14:51:48.019476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.019490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.019783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.019799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.020147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.020162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.020438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.020452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.020807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.020821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 
00:41:24.665 [2024-10-07 14:51:48.021080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.021095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.021423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.021438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.021780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.021795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.021865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.021879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.022209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.022224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 
00:41:24.665 [2024-10-07 14:51:48.022580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.022595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.022887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.022903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.023238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.023253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.023561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.023577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.023819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.023836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 
00:41:24.665 [2024-10-07 14:51:48.024160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.024177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.024516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.024532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.024825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.024840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.025150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.025166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.025343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.025358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 
00:41:24.665 [2024-10-07 14:51:48.025667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.025682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.025878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.025894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.026095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.026112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.026451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.026466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.026748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.026765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 
00:41:24.665 [2024-10-07 14:51:48.026948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.026963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.027264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.027279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.027613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.027628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.027937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.027951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 00:41:24.665 [2024-10-07 14:51:48.028158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.665 [2024-10-07 14:51:48.028174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.665 qpair failed and we were unable to recover it. 
00:41:24.668 [2024-10-07 14:51:48.062099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.668 [2024-10-07 14:51:48.062113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.668 qpair failed and we were unable to recover it. 00:41:24.668 [2024-10-07 14:51:48.062453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.668 [2024-10-07 14:51:48.062468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.668 qpair failed and we were unable to recover it. 00:41:24.668 [2024-10-07 14:51:48.062803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.668 [2024-10-07 14:51:48.062817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.668 qpair failed and we were unable to recover it. 00:41:24.668 [2024-10-07 14:51:48.063140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.668 [2024-10-07 14:51:48.063157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.668 qpair failed and we were unable to recover it. 00:41:24.668 [2024-10-07 14:51:48.063507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.668 [2024-10-07 14:51:48.063521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.668 qpair failed and we were unable to recover it. 
00:41:24.668 [2024-10-07 14:51:48.063835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.668 [2024-10-07 14:51:48.063850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.668 qpair failed and we were unable to recover it. 00:41:24.668 [2024-10-07 14:51:48.064037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.668 [2024-10-07 14:51:48.064052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.668 qpair failed and we were unable to recover it. 00:41:24.668 [2024-10-07 14:51:48.064249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.668 [2024-10-07 14:51:48.064264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.668 qpair failed and we were unable to recover it. 00:41:24.668 [2024-10-07 14:51:48.064477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.668 [2024-10-07 14:51:48.064492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.668 qpair failed and we were unable to recover it. 00:41:24.668 [2024-10-07 14:51:48.064772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.064788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 
00:41:24.669 [2024-10-07 14:51:48.065169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.065184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.065535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.065550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.065729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.065745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.066140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.066156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.066496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.066511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 
00:41:24.669 [2024-10-07 14:51:48.066663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.066678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.067018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.067034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.067356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.067371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.067702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.067717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.068021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.068036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 
00:41:24.669 [2024-10-07 14:51:48.068380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.068395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.068568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.068582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.068927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.068943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.069292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.069308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.069603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.069619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 
00:41:24.669 [2024-10-07 14:51:48.069949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.069964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.070145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.070160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.070480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.070495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.070815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.070829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.071162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.071178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 
00:41:24.669 [2024-10-07 14:51:48.071243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.071258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.071459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.071475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.071533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.071547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.071836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.071851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.072178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.072193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 
00:41:24.669 [2024-10-07 14:51:48.072516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.072534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.072707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.072721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.073012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.073028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.073352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.073367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.073587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.073602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 
00:41:24.669 [2024-10-07 14:51:48.073935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.073951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.074149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.074163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.074463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.074477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.074794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.074809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.075145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.075161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 
00:41:24.669 [2024-10-07 14:51:48.075548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.075564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.075889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.075905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.076249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.076264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.669 qpair failed and we were unable to recover it. 00:41:24.669 [2024-10-07 14:51:48.076571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.669 [2024-10-07 14:51:48.076587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 00:41:24.670 [2024-10-07 14:51:48.076912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.076926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 
00:41:24.670 [2024-10-07 14:51:48.077209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.077225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 00:41:24.670 [2024-10-07 14:51:48.077525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.077540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 00:41:24.670 [2024-10-07 14:51:48.077728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.077741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 00:41:24.670 [2024-10-07 14:51:48.078106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.078121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 00:41:24.670 [2024-10-07 14:51:48.078436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.078452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 
00:41:24.670 [2024-10-07 14:51:48.078778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.078793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 00:41:24.670 [2024-10-07 14:51:48.078976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.078990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 00:41:24.670 [2024-10-07 14:51:48.079322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.079337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 00:41:24.670 [2024-10-07 14:51:48.079513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.079527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 00:41:24.670 [2024-10-07 14:51:48.079866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.079881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 
00:41:24.670 [2024-10-07 14:51:48.080195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.080211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 00:41:24.670 [2024-10-07 14:51:48.080387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.080401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 00:41:24.670 [2024-10-07 14:51:48.080725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.080742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 00:41:24.670 [2024-10-07 14:51:48.081049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.081064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 00:41:24.670 [2024-10-07 14:51:48.081397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.081413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 
00:41:24.670 [2024-10-07 14:51:48.081597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.081611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 00:41:24.670 [2024-10-07 14:51:48.081919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.081933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 00:41:24.670 [2024-10-07 14:51:48.082239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.082254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 00:41:24.670 [2024-10-07 14:51:48.082604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.082620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 00:41:24.670 [2024-10-07 14:51:48.082795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.082811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 
00:41:24.670 [2024-10-07 14:51:48.082981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.082995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 00:41:24.670 [2024-10-07 14:51:48.083341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.083356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 00:41:24.670 [2024-10-07 14:51:48.083548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.083562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 00:41:24.670 [2024-10-07 14:51:48.083888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.083904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 00:41:24.670 [2024-10-07 14:51:48.084024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.084039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it. 
00:41:24.670 [2024-10-07 14:51:48.084402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.670 [2024-10-07 14:51:48.084419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.670 qpair failed and we were unable to recover it.
00:41:24.673 [2024-10-07 14:51:48.084744 .. 14:51:48.118135] (previous posix_sock_create / nvme_tcp_qpair_connect_sock error pair and "qpair failed and we were unable to recover it." repeated, identical except for timestamps, for tqpair=0x61500039f100 with addr=10.0.0.2, port=4420)
00:41:24.673 [2024-10-07 14:51:48.118441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.673 [2024-10-07 14:51:48.118456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.673 qpair failed and we were unable to recover it. 00:41:24.673 [2024-10-07 14:51:48.118838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.673 [2024-10-07 14:51:48.118853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.673 qpair failed and we were unable to recover it. 00:41:24.673 [2024-10-07 14:51:48.119169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.673 [2024-10-07 14:51:48.119185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.673 qpair failed and we were unable to recover it. 00:41:24.673 [2024-10-07 14:51:48.119488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.673 [2024-10-07 14:51:48.119502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.673 qpair failed and we were unable to recover it. 00:41:24.673 [2024-10-07 14:51:48.119743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.673 [2024-10-07 14:51:48.119759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.673 qpair failed and we were unable to recover it. 
00:41:24.673 [2024-10-07 14:51:48.119928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.673 [2024-10-07 14:51:48.119943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.673 qpair failed and we were unable to recover it. 00:41:24.673 [2024-10-07 14:51:48.120275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.673 [2024-10-07 14:51:48.120290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.673 qpair failed and we were unable to recover it. 00:41:24.673 [2024-10-07 14:51:48.120581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.673 [2024-10-07 14:51:48.120596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.673 qpair failed and we were unable to recover it. 00:41:24.673 [2024-10-07 14:51:48.120921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.673 [2024-10-07 14:51:48.120937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.673 qpair failed and we were unable to recover it. 00:41:24.673 [2024-10-07 14:51:48.121244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.673 [2024-10-07 14:51:48.121259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.673 qpair failed and we were unable to recover it. 
00:41:24.673 [2024-10-07 14:51:48.121551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.673 [2024-10-07 14:51:48.121566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.673 qpair failed and we were unable to recover it. 00:41:24.673 [2024-10-07 14:51:48.121898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.673 [2024-10-07 14:51:48.121912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.673 qpair failed and we were unable to recover it. 00:41:24.673 [2024-10-07 14:51:48.122286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.673 [2024-10-07 14:51:48.122301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.673 qpair failed and we were unable to recover it. 00:41:24.673 [2024-10-07 14:51:48.122654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.673 [2024-10-07 14:51:48.122670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.673 qpair failed and we were unable to recover it. 00:41:24.673 [2024-10-07 14:51:48.122968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.122983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 
00:41:24.674 [2024-10-07 14:51:48.123198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.123212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.123522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.123538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.123918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.123933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.124278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.124294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.124596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.124611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 
00:41:24.674 [2024-10-07 14:51:48.124944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.124960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.125163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.125179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.125355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.125370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.125555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.125571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.125859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.125875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 
00:41:24.674 [2024-10-07 14:51:48.126173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.126188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.126501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.126516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.126922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.126937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.127237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.127268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.127603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.127619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 
00:41:24.674 [2024-10-07 14:51:48.127946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.127961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.128162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.128176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.128483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.128500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.128837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.128851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.129172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.129188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 
00:41:24.674 [2024-10-07 14:51:48.129358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.129373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.129701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.129716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.130060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.130075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.130361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.130376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.130556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.130572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 
00:41:24.674 [2024-10-07 14:51:48.130904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.130918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.131234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.131250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.131315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.131329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.131626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.131641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.131942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.131957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 
00:41:24.674 [2024-10-07 14:51:48.132280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.132296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.132494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.132509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.132818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.132833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.133031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.133047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.133281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.133296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 
00:41:24.674 [2024-10-07 14:51:48.133611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.133626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.133681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.133696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.134010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.134025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.674 [2024-10-07 14:51:48.134357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.674 [2024-10-07 14:51:48.134372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.674 qpair failed and we were unable to recover it. 00:41:24.675 [2024-10-07 14:51:48.134667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.675 [2024-10-07 14:51:48.134683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.675 qpair failed and we were unable to recover it. 
00:41:24.675 [2024-10-07 14:51:48.134989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.675 [2024-10-07 14:51:48.135009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.675 qpair failed and we were unable to recover it. 00:41:24.675 [2024-10-07 14:51:48.135357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.675 [2024-10-07 14:51:48.135374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.675 qpair failed and we were unable to recover it. 00:41:24.675 [2024-10-07 14:51:48.135687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.675 [2024-10-07 14:51:48.135702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.675 qpair failed and we were unable to recover it. 00:41:24.675 [2024-10-07 14:51:48.135889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.675 [2024-10-07 14:51:48.135905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.675 qpair failed and we were unable to recover it. 00:41:24.675 [2024-10-07 14:51:48.136162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.675 [2024-10-07 14:51:48.136177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.675 qpair failed and we were unable to recover it. 
00:41:24.675 [2024-10-07 14:51:48.136390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.675 [2024-10-07 14:51:48.136404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.675 qpair failed and we were unable to recover it. 00:41:24.675 [2024-10-07 14:51:48.136723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.675 [2024-10-07 14:51:48.136738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.675 qpair failed and we were unable to recover it. 00:41:24.675 [2024-10-07 14:51:48.137066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.675 [2024-10-07 14:51:48.137082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.675 qpair failed and we were unable to recover it. 00:41:24.675 [2024-10-07 14:51:48.137397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.675 [2024-10-07 14:51:48.137411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.675 qpair failed and we were unable to recover it. 00:41:24.675 [2024-10-07 14:51:48.137746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.675 [2024-10-07 14:51:48.137760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.675 qpair failed and we were unable to recover it. 
00:41:24.675 [2024-10-07 14:51:48.138092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.675 [2024-10-07 14:51:48.138106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.675 qpair failed and we were unable to recover it. 00:41:24.675 [2024-10-07 14:51:48.138453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.675 [2024-10-07 14:51:48.138467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.675 qpair failed and we were unable to recover it. 00:41:24.675 [2024-10-07 14:51:48.138802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.675 [2024-10-07 14:51:48.138818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.675 qpair failed and we were unable to recover it. 00:41:24.675 [2024-10-07 14:51:48.139013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.675 [2024-10-07 14:51:48.139029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.675 qpair failed and we were unable to recover it. 00:41:24.675 [2024-10-07 14:51:48.139340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.675 [2024-10-07 14:51:48.139354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.675 qpair failed and we were unable to recover it. 
00:41:24.675 [2024-10-07 14:51:48.139693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.675 [2024-10-07 14:51:48.139709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.675 qpair failed and we were unable to recover it. 00:41:24.675 [2024-10-07 14:51:48.140046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.675 [2024-10-07 14:51:48.140062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.675 qpair failed and we were unable to recover it. 00:41:24.675 [2024-10-07 14:51:48.140239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.675 [2024-10-07 14:51:48.140255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.675 qpair failed and we were unable to recover it. 00:41:24.675 [2024-10-07 14:51:48.140436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.675 [2024-10-07 14:51:48.140450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.675 qpair failed and we were unable to recover it. 00:41:24.675 [2024-10-07 14:51:48.140770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.675 [2024-10-07 14:51:48.140785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.675 qpair failed and we were unable to recover it. 
00:41:24.675 [2024-10-07 14:51:48.141117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.675 [2024-10-07 14:51:48.141133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.675 qpair failed and we were unable to recover it.
[... the same three-line error sequence — posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats continuously from 2024-10-07 14:51:48.141 through 14:51:48.175 ...]
00:41:24.678 [2024-10-07 14:51:48.175579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.678 [2024-10-07 14:51:48.175594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.678 qpair failed and we were unable to recover it. 00:41:24.678 [2024-10-07 14:51:48.175923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.678 [2024-10-07 14:51:48.175938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.678 qpair failed and we were unable to recover it. 00:41:24.678 [2024-10-07 14:51:48.176216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.678 [2024-10-07 14:51:48.176233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.678 qpair failed and we were unable to recover it. 00:41:24.678 [2024-10-07 14:51:48.176576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.678 [2024-10-07 14:51:48.176592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.678 qpair failed and we were unable to recover it. 00:41:24.678 [2024-10-07 14:51:48.176772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.678 [2024-10-07 14:51:48.176787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.678 qpair failed and we were unable to recover it. 
00:41:24.678 [2024-10-07 14:51:48.176852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.678 [2024-10-07 14:51:48.176866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.678 qpair failed and we were unable to recover it. 00:41:24.678 [2024-10-07 14:51:48.177186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.678 [2024-10-07 14:51:48.177203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.678 qpair failed and we were unable to recover it. 00:41:24.678 [2024-10-07 14:51:48.177497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.678 [2024-10-07 14:51:48.177512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.678 qpair failed and we were unable to recover it. 00:41:24.678 [2024-10-07 14:51:48.177823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.678 [2024-10-07 14:51:48.177839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.678 qpair failed and we were unable to recover it. 00:41:24.678 [2024-10-07 14:51:48.178020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.678 [2024-10-07 14:51:48.178035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.678 qpair failed and we were unable to recover it. 
00:41:24.678 [2024-10-07 14:51:48.178334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.678 [2024-10-07 14:51:48.178348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.678 qpair failed and we were unable to recover it. 00:41:24.678 [2024-10-07 14:51:48.178683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.678 [2024-10-07 14:51:48.178698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.678 qpair failed and we were unable to recover it. 00:41:24.678 [2024-10-07 14:51:48.178868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.678 [2024-10-07 14:51:48.178883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.678 qpair failed and we were unable to recover it. 00:41:24.678 [2024-10-07 14:51:48.179199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.678 [2024-10-07 14:51:48.179214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.678 qpair failed and we were unable to recover it. 00:41:24.678 [2024-10-07 14:51:48.179421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.678 [2024-10-07 14:51:48.179436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.678 qpair failed and we were unable to recover it. 
00:41:24.678 [2024-10-07 14:51:48.179612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.678 [2024-10-07 14:51:48.179627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.678 qpair failed and we were unable to recover it. 00:41:24.678 [2024-10-07 14:51:48.179963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.179979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.180292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.180308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.180496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.180512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.180844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.180860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 
00:41:24.679 [2024-10-07 14:51:48.181180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.181195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.181506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.181521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.181847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.181861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.182190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.182207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.182536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.182551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 
00:41:24.679 [2024-10-07 14:51:48.182885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.182902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.183232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.183248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.183577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.183593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.183762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.183777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.184157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.184173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 
00:41:24.679 [2024-10-07 14:51:48.184360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.184374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.184678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.184702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.185009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.185025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.185395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.185411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.185585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.185600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 
00:41:24.679 [2024-10-07 14:51:48.185793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.185807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.186138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.186153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.186493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.186509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.186686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.186702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.187028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.187045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 
00:41:24.679 [2024-10-07 14:51:48.187255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.187269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.187470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.187484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.187784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.187803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.188092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.188107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.188273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.188289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 
00:41:24.679 [2024-10-07 14:51:48.188621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.188636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.188963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.188978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.189299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.189317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.189649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.189665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.189983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.190007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 
00:41:24.679 [2024-10-07 14:51:48.190308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.190323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.190663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.190679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.190854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.190867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.191179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.191194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.191571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.191586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 
00:41:24.679 [2024-10-07 14:51:48.191900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.679 [2024-10-07 14:51:48.191926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.679 qpair failed and we were unable to recover it. 00:41:24.679 [2024-10-07 14:51:48.192268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.192284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.192573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.192588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.192760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.192775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.192967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.192982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 
00:41:24.680 [2024-10-07 14:51:48.193220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.193236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.193570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.193585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.193765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.193782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.193998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.194027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.194222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.194238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 
00:41:24.680 [2024-10-07 14:51:48.194419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.194434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.194632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.194647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.194966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.194981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.195311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.195328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.195654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.195670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 
00:41:24.680 [2024-10-07 14:51:48.195843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.195857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.196142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.196158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.196503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.196518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.196847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.196863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.197174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.197188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 
00:41:24.680 [2024-10-07 14:51:48.197529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.197543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.197912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.197928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.198259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.198275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.198644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.198660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.198873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.198888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 
00:41:24.680 [2024-10-07 14:51:48.199084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.199101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.199437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.199452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.199645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.199662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.199972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.199986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.200307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.200323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 
00:41:24.680 [2024-10-07 14:51:48.200653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.200668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.200963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.200977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.201266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.201281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.201583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.201598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.680 [2024-10-07 14:51:48.201974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.201989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 
00:41:24.680 [2024-10-07 14:51:48.202319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.680 [2024-10-07 14:51:48.202335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.680 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.202557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.202574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.202765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.202780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.203133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.203149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.203505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.203521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 
00:41:24.681 [2024-10-07 14:51:48.203831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.203846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.204172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.204187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.204521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.204538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.204881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.204897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.205164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.205179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 
00:41:24.681 [2024-10-07 14:51:48.205511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.205527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.205872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.205886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.206226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.206241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.206605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.206620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.206947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.206962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 
00:41:24.681 [2024-10-07 14:51:48.207022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.207036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.207349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.207364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.207705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.207720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.207919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.207933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.208250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.208266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 
00:41:24.681 [2024-10-07 14:51:48.208443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.208457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.208645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.208659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.208852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.208869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.209233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.209250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.209582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.209599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 
00:41:24.681 [2024-10-07 14:51:48.209908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.209925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.210107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.210122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.210451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.210465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.210805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.210821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.211120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.211137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 
00:41:24.681 [2024-10-07 14:51:48.211371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.211385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.211707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.211721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.212055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.212073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.212387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.212402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.212730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.212744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 
00:41:24.681 [2024-10-07 14:51:48.213085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.213101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.213282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.213297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.213470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.213485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.213804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.213819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 00:41:24.681 [2024-10-07 14:51:48.214149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.214165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.681 qpair failed and we were unable to recover it. 
00:41:24.681 [2024-10-07 14:51:48.214358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.681 [2024-10-07 14:51:48.214373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.214709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.214732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.215052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.215068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.215388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.215405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.215761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.215776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 
00:41:24.682 [2024-10-07 14:51:48.216108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.216124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.216364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.216380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.216438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.216452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.216631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.216645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.216858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.216873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 
00:41:24.682 [2024-10-07 14:51:48.217213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.217228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.217573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.217588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.217920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.217935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.218149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.218165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.218466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.218482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 
00:41:24.682 [2024-10-07 14:51:48.218675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.218690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.218980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.218995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.219188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.219205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.219499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.219515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.219843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.219859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 
00:41:24.682 [2024-10-07 14:51:48.220213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.220228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.220537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.220553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.220869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.220884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.221079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.221094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.221423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.221438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 
00:41:24.682 [2024-10-07 14:51:48.221607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.221621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.221965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.221979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.222308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.222324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.222524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.222540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.222876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.222892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 
00:41:24.682 [2024-10-07 14:51:48.223117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.223133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.223356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.223370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.223682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.223698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.224026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.224042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.224213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.224228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 
00:41:24.682 [2024-10-07 14:51:48.224571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.224586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.224919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.224935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.225277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.225293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.225461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.225475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 00:41:24.682 [2024-10-07 14:51:48.225753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.682 [2024-10-07 14:51:48.225768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.682 qpair failed and we were unable to recover it. 
00:41:24.682 [2024-10-07 14:51:48.225951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.683 [2024-10-07 14:51:48.225965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.683 qpair failed and we were unable to recover it. 00:41:24.683 [2024-10-07 14:51:48.226289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.683 [2024-10-07 14:51:48.226305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.683 qpair failed and we were unable to recover it. 00:41:24.683 [2024-10-07 14:51:48.226590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.683 [2024-10-07 14:51:48.226604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.683 qpair failed and we were unable to recover it. 00:41:24.683 [2024-10-07 14:51:48.226803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.683 [2024-10-07 14:51:48.226817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.683 qpair failed and we were unable to recover it. 00:41:24.683 [2024-10-07 14:51:48.227019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.683 [2024-10-07 14:51:48.227035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.683 qpair failed and we were unable to recover it. 
00:41:24.683 [2024-10-07 14:51:48.227326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.683 [2024-10-07 14:51:48.227340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.683 qpair failed and we were unable to recover it. 00:41:24.683 [2024-10-07 14:51:48.227671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.683 [2024-10-07 14:51:48.227686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.683 qpair failed and we were unable to recover it. 00:41:24.683 [2024-10-07 14:51:48.227872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.683 [2024-10-07 14:51:48.227886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.683 qpair failed and we were unable to recover it. 00:41:24.683 [2024-10-07 14:51:48.228192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.683 [2024-10-07 14:51:48.228208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.683 qpair failed and we were unable to recover it. 00:41:24.683 [2024-10-07 14:51:48.228388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.683 [2024-10-07 14:51:48.228403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.683 qpair failed and we were unable to recover it. 
00:41:24.683 [2024-10-07 14:51:48.228744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.228758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.228988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.229008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.229316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.229330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.229635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.229650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.229990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.230009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.230359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.230374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.230754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.230769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.231093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.231108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.231282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.231296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.231665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.231681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.232038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.232054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.232232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.232246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.232428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.232442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.232781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.232796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.233122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.233138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.233456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.233471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.233661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.233675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.233981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.233995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.234263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.234279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.234571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.234586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.234877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.234892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.235218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.235233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.235587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.235604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.235928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.235943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.236146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.236162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.236412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.236426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.236763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.236778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.237085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.683 [2024-10-07 14:51:48.237100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.683 qpair failed and we were unable to recover it.
00:41:24.683 [2024-10-07 14:51:48.237308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.237324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.237606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.237620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.237951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.237965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.238246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.238261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.238457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.238474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.238878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.238893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.239074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.239090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.239437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.239452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.239756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.239771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.240101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.240117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.240444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.240459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.240805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.240820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.241157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.241173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.241549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.241564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.241848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.241864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.241930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.241944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.242154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.242169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.242371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.242385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.242718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.242734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.243069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.243085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.243419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.243433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.243613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.243628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.243932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.243946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.244126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.244141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.244500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.244514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.244843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.244858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.245199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.245214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.245552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.245566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.245879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.245893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.246163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.246178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.246357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.246371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.246557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.246572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.246930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.246945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.247285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.247300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.247630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.247646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.247841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.247856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.248179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.248195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.248537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.248552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 [2024-10-07 14:51:48.248744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.684 [2024-10-07 14:51:48.248759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.684 qpair failed and we were unable to recover it.
00:41:24.684 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:41:24.684 [2024-10-07 14:51:48.249132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.249149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:41:24.685 [2024-10-07 14:51:48.249454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.249469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 [2024-10-07 14:51:48.249637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.249651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:41:24.685 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:41:24.685 [2024-10-07 14:51:48.249973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.249987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:41:24.685 [2024-10-07 14:51:48.250306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.250322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 [2024-10-07 14:51:48.250649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.250664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 [2024-10-07 14:51:48.250957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.250972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 [2024-10-07 14:51:48.251283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.251299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 [2024-10-07 14:51:48.251621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.251636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 [2024-10-07 14:51:48.251980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.251995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 [2024-10-07 14:51:48.252306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.252321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 [2024-10-07 14:51:48.252634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.252650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 [2024-10-07 14:51:48.252949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.252964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 [2024-10-07 14:51:48.253297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.253313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 [2024-10-07 14:51:48.253633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.253648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 [2024-10-07 14:51:48.253978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.253994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 [2024-10-07 14:51:48.254309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.254325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 [2024-10-07 14:51:48.254632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.254648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 [2024-10-07 14:51:48.254898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.254913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 [2024-10-07 14:51:48.255155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.255171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 [2024-10-07 14:51:48.255481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.255498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 [2024-10-07 14:51:48.255789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.255804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 [2024-10-07 14:51:48.256136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.256151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 [2024-10-07 14:51:48.256488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.256503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 [2024-10-07 14:51:48.256848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.256864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 [2024-10-07 14:51:48.257049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.685 [2024-10-07 14:51:48.257065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.685 qpair failed and we were unable to recover it.
00:41:24.685 [2024-10-07 14:51:48.257248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.685 [2024-10-07 14:51:48.257263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.685 qpair failed and we were unable to recover it. 00:41:24.685 [2024-10-07 14:51:48.257588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.685 [2024-10-07 14:51:48.257603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.685 qpair failed and we were unable to recover it. 00:41:24.685 [2024-10-07 14:51:48.257941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.685 [2024-10-07 14:51:48.257955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.685 qpair failed and we were unable to recover it. 00:41:24.685 [2024-10-07 14:51:48.258270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.685 [2024-10-07 14:51:48.258287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.685 qpair failed and we were unable to recover it. 00:41:24.685 [2024-10-07 14:51:48.258583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.685 [2024-10-07 14:51:48.258598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.685 qpair failed and we were unable to recover it. 
00:41:24.685 [2024-10-07 14:51:48.258767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.685 [2024-10-07 14:51:48.258782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.685 qpair failed and we were unable to recover it. 00:41:24.685 [2024-10-07 14:51:48.259155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.685 [2024-10-07 14:51:48.259170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.685 qpair failed and we were unable to recover it. 00:41:24.685 [2024-10-07 14:51:48.259513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.685 [2024-10-07 14:51:48.259531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.685 qpair failed and we were unable to recover it. 00:41:24.685 [2024-10-07 14:51:48.259721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.259737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.259923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.259940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 
00:41:24.686 [2024-10-07 14:51:48.260325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.260340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.260694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.260709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.261063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.261079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.261408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.261424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.261756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.261770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 
00:41:24.686 [2024-10-07 14:51:48.262075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.262091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.262287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.262301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.262495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.262509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.262825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.262840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.263173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.263189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 
00:41:24.686 [2024-10-07 14:51:48.263515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.263530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.263856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.263872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.264067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.264082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.264405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.264419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.264668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.264684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 
00:41:24.686 [2024-10-07 14:51:48.264857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.264873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.265169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.265184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.265495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.265512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.265811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.265825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.265998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.266019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 
00:41:24.686 [2024-10-07 14:51:48.266190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.266205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.266531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.266546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.266871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.266886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.267198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.267213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.267571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.267586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 
00:41:24.686 [2024-10-07 14:51:48.267963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.267978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.268300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.268316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.268668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.268684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.269017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.269033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.269361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.269376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 
00:41:24.686 [2024-10-07 14:51:48.269429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.269442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.269729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.269744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.270056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.270071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.270390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.270406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.270708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.270723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 
00:41:24.686 [2024-10-07 14:51:48.271055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.271071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.271409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.271424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.271739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.271757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.686 [2024-10-07 14:51:48.272086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.686 [2024-10-07 14:51:48.272100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.686 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.272465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.272480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 
00:41:24.687 [2024-10-07 14:51:48.272754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.272768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.273137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.273152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.273466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.273482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.273808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.273823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.274150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.274166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 
00:41:24.687 [2024-10-07 14:51:48.274399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.274415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.274754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.274770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.275070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.275085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.275285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.275299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.275606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.275620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 
00:41:24.687 [2024-10-07 14:51:48.275953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.275968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.276172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.276186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.276507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.276522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.276850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.276866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.277203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.277218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 
00:41:24.687 [2024-10-07 14:51:48.277392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.277407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.277602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.277617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.277817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.277831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.278135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.278150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.278458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.278473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 
00:41:24.687 [2024-10-07 14:51:48.278769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.278784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.278950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.278965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.279154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.279171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.279466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.279480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.279822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.279837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 
00:41:24.687 [2024-10-07 14:51:48.280164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.280180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.280474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.280489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.280790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.280807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.281149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.281164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.281339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.281355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 
00:41:24.687 [2024-10-07 14:51:48.281573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.281588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.281806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.281821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.281984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.282005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.282321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.282336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 00:41:24.687 [2024-10-07 14:51:48.282669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.687 [2024-10-07 14:51:48.282685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.687 qpair failed and we were unable to recover it. 
00:41:24.687 [2024-10-07 14:51:48.283024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:41:24.687 [2024-10-07 14:51:48.283042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420
00:41:24.687 qpair failed and we were unable to recover it.
00:41:24.688 [... the posix.c/nvme_tcp.c error pair and "qpair failed" message above repeated for each reconnect attempt, 14:51:48.283242 through 14:51:48.316375 ...]
00:41:24.688 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:41:24.688 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:41:24.688 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:41:24.688 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:41:24.690 [2024-10-07 14:51:48.316709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.690 [2024-10-07 14:51:48.316725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.690 qpair failed and we were unable to recover it. 00:41:24.690 [2024-10-07 14:51:48.317056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.690 [2024-10-07 14:51:48.317072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.690 qpair failed and we were unable to recover it. 00:41:24.690 [2024-10-07 14:51:48.317300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.690 [2024-10-07 14:51:48.317314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.690 qpair failed and we were unable to recover it. 00:41:24.690 [2024-10-07 14:51:48.317647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.317662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.317995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.318016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 
00:41:24.691 [2024-10-07 14:51:48.318349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.318369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.318559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.318573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.318775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.318790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.319119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.319134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.319446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.319462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 
00:41:24.691 [2024-10-07 14:51:48.319776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.319790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.320083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.320098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.320425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.320439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.320770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.320784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.320959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.320973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 
00:41:24.691 [2024-10-07 14:51:48.321228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.321244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.321568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.321583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.321842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.321856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.322241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.322256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.322558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.322573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 
00:41:24.691 [2024-10-07 14:51:48.322769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.322783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.323046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.323061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.323376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.323391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.323721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.323736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.323794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.323806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 
00:41:24.691 [2024-10-07 14:51:48.324020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.324035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.324360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.324375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.324569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.324584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.324869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.324883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.325346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.325361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 
00:41:24.691 [2024-10-07 14:51:48.325546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.325560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.325882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.325896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.326067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.326081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.326269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.326283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.326571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.326588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 
00:41:24.691 [2024-10-07 14:51:48.326913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.326929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.327245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.327261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.327417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.327432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.327603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.327618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 00:41:24.691 [2024-10-07 14:51:48.327937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.691 [2024-10-07 14:51:48.327952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.691 qpair failed and we were unable to recover it. 
00:41:24.691 [2024-10-07 14:51:48.328017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.328033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.328363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.328377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.328683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.328698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.329008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.329024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.329316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.329330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 
00:41:24.692 [2024-10-07 14:51:48.329511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.329525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.329822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.329836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.330030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.330046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.330385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.330399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.330589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.330604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 
00:41:24.692 [2024-10-07 14:51:48.330778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.330792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.331170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.331184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.331524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.331538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.331864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.331879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.332082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.332097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 
00:41:24.692 [2024-10-07 14:51:48.332412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.332426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.332756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.332771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.333109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.333124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.333437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.333451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.333770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.333785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 
00:41:24.692 [2024-10-07 14:51:48.333976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.333991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.334333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.334350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.334559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.334573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.334847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.334862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.335032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.335047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 
00:41:24.692 [2024-10-07 14:51:48.335374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.335388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.335578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.335592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.335808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.335822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.335989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.336009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.336313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.336327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 
00:41:24.692 [2024-10-07 14:51:48.336516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.336530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.336823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.336838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.337109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.337124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.337308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.337322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.337664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.337681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 
00:41:24.692 [2024-10-07 14:51:48.338008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.338024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.338218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.338234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.338553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.338567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.338623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.338635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 00:41:24.692 [2024-10-07 14:51:48.338916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.692 [2024-10-07 14:51:48.338931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.692 qpair failed and we were unable to recover it. 
00:41:24.693 Malloc0 00:41:24.693 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:41:24.693 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:41:24.693 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:24.693 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:41:24.694 [2024-10-07 14:51:48.352507] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:41:24.958 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:41:24.958 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:24.958 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:24.958 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:41:24.959 [2024-10-07 14:51:48.369261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.959 [2024-10-07 14:51:48.369276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.959 qpair failed and we were unable to recover it. 00:41:24.959 [2024-10-07 14:51:48.369594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.959 [2024-10-07 14:51:48.369609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.959 qpair failed and we were unable to recover it. 00:41:24.959 [2024-10-07 14:51:48.369982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.959 [2024-10-07 14:51:48.369996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.959 qpair failed and we were unable to recover it. 00:41:24.959 [2024-10-07 14:51:48.370128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.959 [2024-10-07 14:51:48.370146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.959 qpair failed and we were unable to recover it. 00:41:24.959 [2024-10-07 14:51:48.370481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.959 [2024-10-07 14:51:48.370495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.959 qpair failed and we were unable to recover it. 
00:41:24.959 [2024-10-07 14:51:48.370827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.959 [2024-10-07 14:51:48.370842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.959 qpair failed and we were unable to recover it. 00:41:24.959 [2024-10-07 14:51:48.371182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.959 [2024-10-07 14:51:48.371197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.959 qpair failed and we were unable to recover it. 00:41:24.959 [2024-10-07 14:51:48.371534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.959 [2024-10-07 14:51:48.371548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.959 qpair failed and we were unable to recover it. 00:41:24.959 [2024-10-07 14:51:48.371871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.959 [2024-10-07 14:51:48.371887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.959 qpair failed and we were unable to recover it. 00:41:24.959 [2024-10-07 14:51:48.372233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.959 [2024-10-07 14:51:48.372249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.959 qpair failed and we were unable to recover it. 
00:41:24.959 [2024-10-07 14:51:48.372322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.959 [2024-10-07 14:51:48.372335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.959 qpair failed and we were unable to recover it. 00:41:24.959 [2024-10-07 14:51:48.372493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.959 [2024-10-07 14:51:48.372507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.959 qpair failed and we were unable to recover it. 00:41:24.959 [2024-10-07 14:51:48.372706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.959 [2024-10-07 14:51:48.372722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.959 qpair failed and we were unable to recover it. 00:41:24.959 [2024-10-07 14:51:48.372959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.959 [2024-10-07 14:51:48.372973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.959 qpair failed and we were unable to recover it. 00:41:24.959 [2024-10-07 14:51:48.373278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.959 [2024-10-07 14:51:48.373294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.959 qpair failed and we were unable to recover it. 
00:41:24.959 [2024-10-07 14:51:48.373543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.959 [2024-10-07 14:51:48.373557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.959 qpair failed and we were unable to recover it. 00:41:24.959 [2024-10-07 14:51:48.373732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.959 [2024-10-07 14:51:48.373746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.959 qpair failed and we were unable to recover it. 00:41:24.959 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:24.959 [2024-10-07 14:51:48.373941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.959 [2024-10-07 14:51:48.373956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.959 qpair failed and we were unable to recover it. 00:41:24.959 [2024-10-07 14:51:48.374202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.959 [2024-10-07 14:51:48.374216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.959 qpair failed and we were unable to recover it. 
00:41:24.959 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:24.959 [2024-10-07 14:51:48.374514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.959 [2024-10-07 14:51:48.374529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.959 qpair failed and we were unable to recover it. 00:41:24.959 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:24.959 [2024-10-07 14:51:48.374747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.959 [2024-10-07 14:51:48.374762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.959 qpair failed and we were unable to recover it. 00:41:24.959 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:24.959 [2024-10-07 14:51:48.375068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.959 [2024-10-07 14:51:48.375083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.959 qpair failed and we were unable to recover it. 00:41:24.959 [2024-10-07 14:51:48.375378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.375392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 
00:41:24.960 [2024-10-07 14:51:48.375738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.375752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.375925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.375939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.376114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.376130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.376536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.376551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.376927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.376942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 
00:41:24.960 [2024-10-07 14:51:48.377255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.377274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.377340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.377354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.377540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.377555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.377856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.377871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.378039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.378054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 
00:41:24.960 [2024-10-07 14:51:48.378372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.378386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.378714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.378728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.379068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.379084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.379263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.379278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.379569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.379584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 
00:41:24.960 [2024-10-07 14:51:48.379766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.379781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.380120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.380135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.380330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.380344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.380680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.380694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.381023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.381038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 
00:41:24.960 [2024-10-07 14:51:48.381233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.381248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.381487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.381501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.381723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.381737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.382067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.382083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.382417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.382432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 
00:41:24.960 [2024-10-07 14:51:48.382765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.382780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.383096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.383112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.383328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.383342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.383518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.383532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.383698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.383713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 
00:41:24.960 [2024-10-07 14:51:48.384052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.384067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.384401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.384416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.384745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.384760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.384942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.384955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.385192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.385208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 
00:41:24.960 [2024-10-07 14:51:48.385550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.385564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.385907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.385922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:24.960 [2024-10-07 14:51:48.386125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.386139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 00:41:24.960 [2024-10-07 14:51:48.386334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.960 [2024-10-07 14:51:48.386349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.960 qpair failed and we were unable to recover it. 
00:41:24.961 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:24.961 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:24.961 [2024-10-07 14:51:48.386693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.961 [2024-10-07 14:51:48.386708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.961 qpair failed and we were unable to recover it. 00:41:24.961 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:24.961 [2024-10-07 14:51:48.387029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.961 [2024-10-07 14:51:48.387045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.961 qpair failed and we were unable to recover it. 00:41:24.961 [2024-10-07 14:51:48.387348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.961 [2024-10-07 14:51:48.387362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.961 qpair failed and we were unable to recover it. 00:41:24.961 [2024-10-07 14:51:48.387551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.961 [2024-10-07 14:51:48.387567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.961 qpair failed and we were unable to recover it. 
00:41:24.961 [2024-10-07 14:51:48.387651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.961 [2024-10-07 14:51:48.387667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.961 qpair failed and we were unable to recover it. 00:41:24.961 [2024-10-07 14:51:48.387955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.961 [2024-10-07 14:51:48.387969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.961 qpair failed and we were unable to recover it. 00:41:24.961 [2024-10-07 14:51:48.388276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.961 [2024-10-07 14:51:48.388293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.961 qpair failed and we were unable to recover it. 00:41:24.961 [2024-10-07 14:51:48.388633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.961 [2024-10-07 14:51:48.388648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.961 qpair failed and we were unable to recover it. 00:41:24.961 [2024-10-07 14:51:48.388980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.961 [2024-10-07 14:51:48.388995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.961 qpair failed and we were unable to recover it. 
00:41:24.961 [2024-10-07 14:51:48.389209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.961 [2024-10-07 14:51:48.389224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.961 qpair failed and we were unable to recover it. 00:41:24.961 [2024-10-07 14:51:48.389547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.961 [2024-10-07 14:51:48.389561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.961 qpair failed and we were unable to recover it. 00:41:24.961 [2024-10-07 14:51:48.389935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.961 [2024-10-07 14:51:48.389954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.961 qpair failed and we were unable to recover it. 00:41:24.961 [2024-10-07 14:51:48.390235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.961 [2024-10-07 14:51:48.390251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.961 qpair failed and we were unable to recover it. 00:41:24.961 [2024-10-07 14:51:48.390448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.961 [2024-10-07 14:51:48.390464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.961 qpair failed and we were unable to recover it. 
00:41:24.961 [2024-10-07 14:51:48.390793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.961 [2024-10-07 14:51:48.390808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.961 qpair failed and we were unable to recover it. 00:41:24.961 [2024-10-07 14:51:48.391136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.961 [2024-10-07 14:51:48.391151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.961 qpair failed and we were unable to recover it. 00:41:24.961 [2024-10-07 14:51:48.391297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.961 [2024-10-07 14:51:48.391311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.961 qpair failed and we were unable to recover it. 00:41:24.961 [2024-10-07 14:51:48.391491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.961 [2024-10-07 14:51:48.391505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.961 qpair failed and we were unable to recover it. 00:41:24.961 [2024-10-07 14:51:48.391836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.961 [2024-10-07 14:51:48.391850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.961 qpair failed and we were unable to recover it. 
00:41:24.961 [2024-10-07 14:51:48.392221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.961 [2024-10-07 14:51:48.392237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.961 qpair failed and we were unable to recover it. 00:41:24.961 [2024-10-07 14:51:48.392560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.961 [2024-10-07 14:51:48.392574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.961 qpair failed and we were unable to recover it. 00:41:24.961 [2024-10-07 14:51:48.392900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:41:24.961 [2024-10-07 14:51:48.392914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500039f100 with addr=10.0.0.2, port=4420 00:41:24.961 qpair failed and we were unable to recover it. 
00:41:24.961 [2024-10-07 14:51:48.393395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:24.961 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:24.961 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:24.961 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:24.961 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:24.961 [2024-10-07 14:51:48.403995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.961 [2024-10-07 14:51:48.404102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.961 [2024-10-07 14:51:48.404130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.961 [2024-10-07 14:51:48.404146] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.961 [2024-10-07 14:51:48.404158] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.961 [2024-10-07 14:51:48.404190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.961 qpair failed and we were unable to recover it. 
00:41:24.961 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:24.961 14:51:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3307899 00:41:24.961 [2024-10-07 14:51:48.413879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.961 [2024-10-07 14:51:48.414023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.961 [2024-10-07 14:51:48.414046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.961 [2024-10-07 14:51:48.414059] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.961 [2024-10-07 14:51:48.414069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.961 [2024-10-07 14:51:48.414094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.961 qpair failed and we were unable to recover it. 
00:41:24.961 [2024-10-07 14:51:48.423742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.961 [2024-10-07 14:51:48.423827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.961 [2024-10-07 14:51:48.423849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.961 [2024-10-07 14:51:48.423861] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.961 [2024-10-07 14:51:48.423870] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.961 [2024-10-07 14:51:48.423892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.961 qpair failed and we were unable to recover it. 
00:41:24.961 [2024-10-07 14:51:48.433826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.961 [2024-10-07 14:51:48.433910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.961 [2024-10-07 14:51:48.433931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.961 [2024-10-07 14:51:48.433945] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.961 [2024-10-07 14:51:48.433955] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.961 [2024-10-07 14:51:48.433978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.961 qpair failed and we were unable to recover it. 
00:41:24.961 [2024-10-07 14:51:48.443906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.961 [2024-10-07 14:51:48.443991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.961 [2024-10-07 14:51:48.444022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.961 [2024-10-07 14:51:48.444034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.961 [2024-10-07 14:51:48.444045] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.961 [2024-10-07 14:51:48.444068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.961 qpair failed and we were unable to recover it. 
00:41:24.962 [2024-10-07 14:51:48.453911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.962 [2024-10-07 14:51:48.453992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.962 [2024-10-07 14:51:48.454021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.962 [2024-10-07 14:51:48.454034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.962 [2024-10-07 14:51:48.454044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.962 [2024-10-07 14:51:48.454066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.962 qpair failed and we were unable to recover it. 
00:41:24.962 [2024-10-07 14:51:48.463870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.962 [2024-10-07 14:51:48.463950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.962 [2024-10-07 14:51:48.463972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.962 [2024-10-07 14:51:48.463989] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.962 [2024-10-07 14:51:48.463998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.962 [2024-10-07 14:51:48.464027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.962 qpair failed and we were unable to recover it. 
00:41:24.962 [2024-10-07 14:51:48.473947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.962 [2024-10-07 14:51:48.474033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.962 [2024-10-07 14:51:48.474055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.962 [2024-10-07 14:51:48.474067] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.962 [2024-10-07 14:51:48.474076] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.962 [2024-10-07 14:51:48.474098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.962 qpair failed and we were unable to recover it. 
00:41:24.962 [2024-10-07 14:51:48.483934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.962 [2024-10-07 14:51:48.484021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.962 [2024-10-07 14:51:48.484043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.962 [2024-10-07 14:51:48.484055] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.962 [2024-10-07 14:51:48.484064] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.962 [2024-10-07 14:51:48.484086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.962 qpair failed and we were unable to recover it. 
00:41:24.962 [2024-10-07 14:51:48.493975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.962 [2024-10-07 14:51:48.494125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.962 [2024-10-07 14:51:48.494148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.962 [2024-10-07 14:51:48.494160] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.962 [2024-10-07 14:51:48.494169] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.962 [2024-10-07 14:51:48.494191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.962 qpair failed and we were unable to recover it. 
00:41:24.962 [2024-10-07 14:51:48.503998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.962 [2024-10-07 14:51:48.504077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.962 [2024-10-07 14:51:48.504099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.962 [2024-10-07 14:51:48.504111] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.962 [2024-10-07 14:51:48.504122] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.962 [2024-10-07 14:51:48.504143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.962 qpair failed and we were unable to recover it. 
00:41:24.962 [2024-10-07 14:51:48.514016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.962 [2024-10-07 14:51:48.514097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.962 [2024-10-07 14:51:48.514119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.962 [2024-10-07 14:51:48.514131] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.962 [2024-10-07 14:51:48.514140] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.962 [2024-10-07 14:51:48.514162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.962 qpair failed and we were unable to recover it. 
00:41:24.962 [2024-10-07 14:51:48.524048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.962 [2024-10-07 14:51:48.524137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.962 [2024-10-07 14:51:48.524162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.962 [2024-10-07 14:51:48.524174] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.962 [2024-10-07 14:51:48.524184] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.962 [2024-10-07 14:51:48.524209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.962 qpair failed and we were unable to recover it. 
00:41:24.962 [2024-10-07 14:51:48.534084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.962 [2024-10-07 14:51:48.534169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.962 [2024-10-07 14:51:48.534190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.962 [2024-10-07 14:51:48.534202] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.962 [2024-10-07 14:51:48.534211] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.962 [2024-10-07 14:51:48.534233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.962 qpair failed and we were unable to recover it. 
00:41:24.962 [2024-10-07 14:51:48.544065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.962 [2024-10-07 14:51:48.544139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.962 [2024-10-07 14:51:48.544160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.962 [2024-10-07 14:51:48.544172] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.962 [2024-10-07 14:51:48.544181] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.962 [2024-10-07 14:51:48.544202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.962 qpair failed and we were unable to recover it. 
00:41:24.962 [2024-10-07 14:51:48.554127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.962 [2024-10-07 14:51:48.554201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.962 [2024-10-07 14:51:48.554226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.962 [2024-10-07 14:51:48.554238] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.962 [2024-10-07 14:51:48.554247] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.962 [2024-10-07 14:51:48.554269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.962 qpair failed and we were unable to recover it. 
00:41:24.962 [2024-10-07 14:51:48.564132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.962 [2024-10-07 14:51:48.564210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.962 [2024-10-07 14:51:48.564231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.962 [2024-10-07 14:51:48.564248] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.962 [2024-10-07 14:51:48.564258] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.962 [2024-10-07 14:51:48.564280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.962 qpair failed and we were unable to recover it. 
00:41:24.962 [2024-10-07 14:51:48.574217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.963 [2024-10-07 14:51:48.574294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.963 [2024-10-07 14:51:48.574315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.963 [2024-10-07 14:51:48.574327] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.963 [2024-10-07 14:51:48.574336] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.963 [2024-10-07 14:51:48.574357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.963 qpair failed and we were unable to recover it. 
00:41:24.963 [2024-10-07 14:51:48.584187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.963 [2024-10-07 14:51:48.584261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.963 [2024-10-07 14:51:48.584282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.963 [2024-10-07 14:51:48.584293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.963 [2024-10-07 14:51:48.584303] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.963 [2024-10-07 14:51:48.584325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.963 qpair failed and we were unable to recover it. 
00:41:24.963 [2024-10-07 14:51:48.594243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.963 [2024-10-07 14:51:48.594317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.963 [2024-10-07 14:51:48.594338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.963 [2024-10-07 14:51:48.594350] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.963 [2024-10-07 14:51:48.594360] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.963 [2024-10-07 14:51:48.594387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.963 qpair failed and we were unable to recover it. 
00:41:24.963 [2024-10-07 14:51:48.604280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.963 [2024-10-07 14:51:48.604354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.963 [2024-10-07 14:51:48.604375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.963 [2024-10-07 14:51:48.604386] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.963 [2024-10-07 14:51:48.604396] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.963 [2024-10-07 14:51:48.604417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.963 qpair failed and we were unable to recover it. 
00:41:24.963 [2024-10-07 14:51:48.614278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.963 [2024-10-07 14:51:48.614362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.963 [2024-10-07 14:51:48.614384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.963 [2024-10-07 14:51:48.614395] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.963 [2024-10-07 14:51:48.614404] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.963 [2024-10-07 14:51:48.614425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.963 qpair failed and we were unable to recover it. 
00:41:24.963 [2024-10-07 14:51:48.624310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.963 [2024-10-07 14:51:48.624385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.963 [2024-10-07 14:51:48.624406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.963 [2024-10-07 14:51:48.624417] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.963 [2024-10-07 14:51:48.624427] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.963 [2024-10-07 14:51:48.624449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.963 qpair failed and we were unable to recover it. 
00:41:24.963 [2024-10-07 14:51:48.634311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.963 [2024-10-07 14:51:48.634402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.963 [2024-10-07 14:51:48.634423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.963 [2024-10-07 14:51:48.634435] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.963 [2024-10-07 14:51:48.634445] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.963 [2024-10-07 14:51:48.634466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.963 qpair failed and we were unable to recover it. 
00:41:24.963 [2024-10-07 14:51:48.644341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.963 [2024-10-07 14:51:48.644427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.963 [2024-10-07 14:51:48.644451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.963 [2024-10-07 14:51:48.644463] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.963 [2024-10-07 14:51:48.644472] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.963 [2024-10-07 14:51:48.644494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.963 qpair failed and we were unable to recover it. 
00:41:24.963 [2024-10-07 14:51:48.654408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:24.963 [2024-10-07 14:51:48.654487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:24.963 [2024-10-07 14:51:48.654509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:24.963 [2024-10-07 14:51:48.654520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:24.963 [2024-10-07 14:51:48.654530] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:24.963 [2024-10-07 14:51:48.654550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:24.963 qpair failed and we were unable to recover it. 
00:41:25.224 [2024-10-07 14:51:48.664421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.224 [2024-10-07 14:51:48.664498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.224 [2024-10-07 14:51:48.664520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.224 [2024-10-07 14:51:48.664531] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.224 [2024-10-07 14:51:48.664540] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.224 [2024-10-07 14:51:48.664562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.224 qpair failed and we were unable to recover it. 
00:41:25.224 [2024-10-07 14:51:48.674434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.224 [2024-10-07 14:51:48.674523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.224 [2024-10-07 14:51:48.674544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.224 [2024-10-07 14:51:48.674556] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.225 [2024-10-07 14:51:48.674567] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.225 [2024-10-07 14:51:48.674588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.225 qpair failed and we were unable to recover it. 
00:41:25.225 [2024-10-07 14:51:48.684438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.225 [2024-10-07 14:51:48.684516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.225 [2024-10-07 14:51:48.684537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.225 [2024-10-07 14:51:48.684549] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.225 [2024-10-07 14:51:48.684558] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.225 [2024-10-07 14:51:48.684583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.225 qpair failed and we were unable to recover it.
00:41:25.225 [2024-10-07 14:51:48.694476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.225 [2024-10-07 14:51:48.694557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.225 [2024-10-07 14:51:48.694578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.225 [2024-10-07 14:51:48.694590] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.225 [2024-10-07 14:51:48.694599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.225 [2024-10-07 14:51:48.694621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.225 qpair failed and we were unable to recover it.
00:41:25.225 [2024-10-07 14:51:48.704551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.225 [2024-10-07 14:51:48.704626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.225 [2024-10-07 14:51:48.704647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.225 [2024-10-07 14:51:48.704658] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.225 [2024-10-07 14:51:48.704668] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.225 [2024-10-07 14:51:48.704690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.225 qpair failed and we were unable to recover it.
00:41:25.225 [2024-10-07 14:51:48.714584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.225 [2024-10-07 14:51:48.714674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.225 [2024-10-07 14:51:48.714696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.225 [2024-10-07 14:51:48.714708] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.225 [2024-10-07 14:51:48.714718] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.225 [2024-10-07 14:51:48.714739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.225 qpair failed and we were unable to recover it.
00:41:25.225 [2024-10-07 14:51:48.724601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.225 [2024-10-07 14:51:48.724689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.225 [2024-10-07 14:51:48.724711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.225 [2024-10-07 14:51:48.724723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.225 [2024-10-07 14:51:48.724732] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.225 [2024-10-07 14:51:48.724753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.225 qpair failed and we were unable to recover it.
00:41:25.225 [2024-10-07 14:51:48.734563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.225 [2024-10-07 14:51:48.734682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.225 [2024-10-07 14:51:48.734707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.225 [2024-10-07 14:51:48.734719] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.225 [2024-10-07 14:51:48.734728] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.225 [2024-10-07 14:51:48.734749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.225 qpair failed and we were unable to recover it.
00:41:25.225 [2024-10-07 14:51:48.744644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.225 [2024-10-07 14:51:48.744727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.225 [2024-10-07 14:51:48.744759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.225 [2024-10-07 14:51:48.744773] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.225 [2024-10-07 14:51:48.744783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.225 [2024-10-07 14:51:48.744811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.225 qpair failed and we were unable to recover it.
00:41:25.225 [2024-10-07 14:51:48.754654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.225 [2024-10-07 14:51:48.754739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.225 [2024-10-07 14:51:48.754771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.225 [2024-10-07 14:51:48.754786] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.225 [2024-10-07 14:51:48.754796] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.225 [2024-10-07 14:51:48.754823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.225 qpair failed and we were unable to recover it.
00:41:25.225 [2024-10-07 14:51:48.764659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.225 [2024-10-07 14:51:48.764786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.225 [2024-10-07 14:51:48.764818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.225 [2024-10-07 14:51:48.764832] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.225 [2024-10-07 14:51:48.764843] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.225 [2024-10-07 14:51:48.764870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.225 qpair failed and we were unable to recover it.
00:41:25.225 [2024-10-07 14:51:48.774721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.225 [2024-10-07 14:51:48.774797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.225 [2024-10-07 14:51:48.774820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.225 [2024-10-07 14:51:48.774833] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.225 [2024-10-07 14:51:48.774847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.225 [2024-10-07 14:51:48.774871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.225 qpair failed and we were unable to recover it.
00:41:25.225 [2024-10-07 14:51:48.784738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.225 [2024-10-07 14:51:48.784812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.225 [2024-10-07 14:51:48.784834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.225 [2024-10-07 14:51:48.784845] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.225 [2024-10-07 14:51:48.784856] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.225 [2024-10-07 14:51:48.784878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.225 qpair failed and we were unable to recover it.
00:41:25.225 [2024-10-07 14:51:48.794778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.225 [2024-10-07 14:51:48.794858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.225 [2024-10-07 14:51:48.794879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.225 [2024-10-07 14:51:48.794890] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.225 [2024-10-07 14:51:48.794900] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.225 [2024-10-07 14:51:48.794922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.225 qpair failed and we were unable to recover it.
00:41:25.225 [2024-10-07 14:51:48.804845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.225 [2024-10-07 14:51:48.804926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.225 [2024-10-07 14:51:48.804947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.225 [2024-10-07 14:51:48.804960] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.225 [2024-10-07 14:51:48.804969] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.225 [2024-10-07 14:51:48.804991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.225 qpair failed and we were unable to recover it.
00:41:25.225 [2024-10-07 14:51:48.814772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.225 [2024-10-07 14:51:48.814845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.225 [2024-10-07 14:51:48.814867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.225 [2024-10-07 14:51:48.814878] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.225 [2024-10-07 14:51:48.814888] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.226 [2024-10-07 14:51:48.814909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.226 qpair failed and we were unable to recover it.
00:41:25.226 [2024-10-07 14:51:48.824882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.226 [2024-10-07 14:51:48.824955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.226 [2024-10-07 14:51:48.824977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.226 [2024-10-07 14:51:48.824989] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.226 [2024-10-07 14:51:48.824998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.226 [2024-10-07 14:51:48.825027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.226 qpair failed and we were unable to recover it.
00:41:25.226 [2024-10-07 14:51:48.834865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.226 [2024-10-07 14:51:48.834940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.226 [2024-10-07 14:51:48.834961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.226 [2024-10-07 14:51:48.834973] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.226 [2024-10-07 14:51:48.834982] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.226 [2024-10-07 14:51:48.835010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.226 qpair failed and we were unable to recover it.
00:41:25.226 [2024-10-07 14:51:48.844874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.226 [2024-10-07 14:51:48.844946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.226 [2024-10-07 14:51:48.844968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.226 [2024-10-07 14:51:48.844979] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.226 [2024-10-07 14:51:48.844989] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.226 [2024-10-07 14:51:48.845016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.226 qpair failed and we were unable to recover it.
00:41:25.226 [2024-10-07 14:51:48.854954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.226 [2024-10-07 14:51:48.855034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.226 [2024-10-07 14:51:48.855056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.226 [2024-10-07 14:51:48.855069] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.226 [2024-10-07 14:51:48.855078] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.226 [2024-10-07 14:51:48.855104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.226 qpair failed and we were unable to recover it.
00:41:25.226 [2024-10-07 14:51:48.864963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.226 [2024-10-07 14:51:48.865049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.226 [2024-10-07 14:51:48.865072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.226 [2024-10-07 14:51:48.865093] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.226 [2024-10-07 14:51:48.865103] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.226 [2024-10-07 14:51:48.865125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.226 qpair failed and we were unable to recover it.
00:41:25.226 [2024-10-07 14:51:48.874986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.226 [2024-10-07 14:51:48.875088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.226 [2024-10-07 14:51:48.875111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.226 [2024-10-07 14:51:48.875123] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.226 [2024-10-07 14:51:48.875132] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.226 [2024-10-07 14:51:48.875154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.226 qpair failed and we were unable to recover it.
00:41:25.226 [2024-10-07 14:51:48.884992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.226 [2024-10-07 14:51:48.885070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.226 [2024-10-07 14:51:48.885091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.226 [2024-10-07 14:51:48.885102] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.226 [2024-10-07 14:51:48.885112] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.226 [2024-10-07 14:51:48.885135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.226 qpair failed and we were unable to recover it.
00:41:25.226 [2024-10-07 14:51:48.895026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.226 [2024-10-07 14:51:48.895104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.226 [2024-10-07 14:51:48.895125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.226 [2024-10-07 14:51:48.895137] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.226 [2024-10-07 14:51:48.895147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.226 [2024-10-07 14:51:48.895168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.226 qpair failed and we were unable to recover it.
00:41:25.226 [2024-10-07 14:51:48.905057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.226 [2024-10-07 14:51:48.905136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.226 [2024-10-07 14:51:48.905158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.226 [2024-10-07 14:51:48.905170] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.226 [2024-10-07 14:51:48.905179] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.226 [2024-10-07 14:51:48.905201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.226 qpair failed and we were unable to recover it.
00:41:25.226 [2024-10-07 14:51:48.915042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.226 [2024-10-07 14:51:48.915119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.226 [2024-10-07 14:51:48.915140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.226 [2024-10-07 14:51:48.915152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.226 [2024-10-07 14:51:48.915162] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.226 [2024-10-07 14:51:48.915183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.226 qpair failed and we were unable to recover it.
00:41:25.226 [2024-10-07 14:51:48.925110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.226 [2024-10-07 14:51:48.925190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.226 [2024-10-07 14:51:48.925212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.226 [2024-10-07 14:51:48.925225] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.226 [2024-10-07 14:51:48.925235] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.226 [2024-10-07 14:51:48.925256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.226 qpair failed and we were unable to recover it.
00:41:25.488 [2024-10-07 14:51:48.935056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.488 [2024-10-07 14:51:48.935148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.488 [2024-10-07 14:51:48.935170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.488 [2024-10-07 14:51:48.935181] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.488 [2024-10-07 14:51:48.935191] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.488 [2024-10-07 14:51:48.935212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.488 qpair failed and we were unable to recover it.
00:41:25.488 [2024-10-07 14:51:48.945158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.488 [2024-10-07 14:51:48.945235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.488 [2024-10-07 14:51:48.945256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.488 [2024-10-07 14:51:48.945267] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.488 [2024-10-07 14:51:48.945277] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.488 [2024-10-07 14:51:48.945298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.488 qpair failed and we were unable to recover it.
00:41:25.488 [2024-10-07 14:51:48.955226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.488 [2024-10-07 14:51:48.955304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.488 [2024-10-07 14:51:48.955326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.488 [2024-10-07 14:51:48.955341] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.488 [2024-10-07 14:51:48.955350] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.488 [2024-10-07 14:51:48.955371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.488 qpair failed and we were unable to recover it.
00:41:25.488 [2024-10-07 14:51:48.965203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.488 [2024-10-07 14:51:48.965277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.488 [2024-10-07 14:51:48.965298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.488 [2024-10-07 14:51:48.965309] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.488 [2024-10-07 14:51:48.965319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.488 [2024-10-07 14:51:48.965339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.488 qpair failed and we were unable to recover it.
00:41:25.488 [2024-10-07 14:51:48.975182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.488 [2024-10-07 14:51:48.975257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.489 [2024-10-07 14:51:48.975278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.489 [2024-10-07 14:51:48.975290] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.489 [2024-10-07 14:51:48.975299] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.489 [2024-10-07 14:51:48.975320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.489 qpair failed and we were unable to recover it.
00:41:25.489 [2024-10-07 14:51:48.985308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.489 [2024-10-07 14:51:48.985383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.489 [2024-10-07 14:51:48.985404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.489 [2024-10-07 14:51:48.985415] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.489 [2024-10-07 14:51:48.985425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.489 [2024-10-07 14:51:48.985445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.489 qpair failed and we were unable to recover it.
00:41:25.489 [2024-10-07 14:51:48.995297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.489 [2024-10-07 14:51:48.995408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.489 [2024-10-07 14:51:48.995432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.489 [2024-10-07 14:51:48.995445] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.489 [2024-10-07 14:51:48.995455] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.489 [2024-10-07 14:51:48.995477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.489 qpair failed and we were unable to recover it.
00:41:25.489 [2024-10-07 14:51:49.005317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.489 [2024-10-07 14:51:49.005390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.489 [2024-10-07 14:51:49.005411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.489 [2024-10-07 14:51:49.005423] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.489 [2024-10-07 14:51:49.005432] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.489 [2024-10-07 14:51:49.005454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.489 qpair failed and we were unable to recover it.
00:41:25.489 [2024-10-07 14:51:49.015309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.489 [2024-10-07 14:51:49.015391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.489 [2024-10-07 14:51:49.015412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.489 [2024-10-07 14:51:49.015424] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.489 [2024-10-07 14:51:49.015433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.489 [2024-10-07 14:51:49.015454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.489 qpair failed and we were unable to recover it.
00:41:25.489 [2024-10-07 14:51:49.025385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.489 [2024-10-07 14:51:49.025461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.489 [2024-10-07 14:51:49.025482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.489 [2024-10-07 14:51:49.025494] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.489 [2024-10-07 14:51:49.025504] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.489 [2024-10-07 14:51:49.025526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.489 qpair failed and we were unable to recover it.
00:41:25.489 [2024-10-07 14:51:49.035468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:25.489 [2024-10-07 14:51:49.035557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:25.489 [2024-10-07 14:51:49.035578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:25.489 [2024-10-07 14:51:49.035591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:25.489 [2024-10-07 14:51:49.035600] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:25.489 [2024-10-07 14:51:49.035621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:25.489 qpair failed and we were unable to recover it.
00:41:25.489 [2024-10-07 14:51:49.045389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.489 [2024-10-07 14:51:49.045465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.489 [2024-10-07 14:51:49.045489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.489 [2024-10-07 14:51:49.045501] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.489 [2024-10-07 14:51:49.045511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.489 [2024-10-07 14:51:49.045532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.489 qpair failed and we were unable to recover it. 
00:41:25.489 [2024-10-07 14:51:49.055397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.489 [2024-10-07 14:51:49.055475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.489 [2024-10-07 14:51:49.055496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.489 [2024-10-07 14:51:49.055508] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.489 [2024-10-07 14:51:49.055517] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.489 [2024-10-07 14:51:49.055538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.489 qpair failed and we were unable to recover it. 
00:41:25.489 [2024-10-07 14:51:49.065548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.489 [2024-10-07 14:51:49.065618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.489 [2024-10-07 14:51:49.065639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.489 [2024-10-07 14:51:49.065650] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.489 [2024-10-07 14:51:49.065659] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.489 [2024-10-07 14:51:49.065680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.489 qpair failed and we were unable to recover it. 
00:41:25.489 [2024-10-07 14:51:49.075531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.489 [2024-10-07 14:51:49.075611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.489 [2024-10-07 14:51:49.075632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.489 [2024-10-07 14:51:49.075643] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.489 [2024-10-07 14:51:49.075661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.489 [2024-10-07 14:51:49.075683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.489 qpair failed and we were unable to recover it. 
00:41:25.489 [2024-10-07 14:51:49.085551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.489 [2024-10-07 14:51:49.085633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.489 [2024-10-07 14:51:49.085654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.489 [2024-10-07 14:51:49.085666] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.489 [2024-10-07 14:51:49.085675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.489 [2024-10-07 14:51:49.085700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.489 qpair failed and we were unable to recover it. 
00:41:25.489 [2024-10-07 14:51:49.095605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.489 [2024-10-07 14:51:49.095680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.489 [2024-10-07 14:51:49.095701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.489 [2024-10-07 14:51:49.095713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.489 [2024-10-07 14:51:49.095722] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.489 [2024-10-07 14:51:49.095744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.489 qpair failed and we were unable to recover it. 
00:41:25.489 [2024-10-07 14:51:49.105620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.489 [2024-10-07 14:51:49.105698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.489 [2024-10-07 14:51:49.105719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.489 [2024-10-07 14:51:49.105731] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.489 [2024-10-07 14:51:49.105740] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.489 [2024-10-07 14:51:49.105761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.489 qpair failed and we were unable to recover it. 
00:41:25.489 [2024-10-07 14:51:49.115661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.489 [2024-10-07 14:51:49.115735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.490 [2024-10-07 14:51:49.115757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.490 [2024-10-07 14:51:49.115768] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.490 [2024-10-07 14:51:49.115778] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.490 [2024-10-07 14:51:49.115799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.490 qpair failed and we were unable to recover it. 
00:41:25.490 [2024-10-07 14:51:49.125585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.490 [2024-10-07 14:51:49.125655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.490 [2024-10-07 14:51:49.125677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.490 [2024-10-07 14:51:49.125688] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.490 [2024-10-07 14:51:49.125697] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.490 [2024-10-07 14:51:49.125721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.490 qpair failed and we were unable to recover it. 
00:41:25.490 [2024-10-07 14:51:49.135681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.490 [2024-10-07 14:51:49.135752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.490 [2024-10-07 14:51:49.135776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.490 [2024-10-07 14:51:49.135788] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.490 [2024-10-07 14:51:49.135798] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.490 [2024-10-07 14:51:49.135819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.490 qpair failed and we were unable to recover it. 
00:41:25.490 [2024-10-07 14:51:49.145732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.490 [2024-10-07 14:51:49.145801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.490 [2024-10-07 14:51:49.145822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.490 [2024-10-07 14:51:49.145834] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.490 [2024-10-07 14:51:49.145843] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.490 [2024-10-07 14:51:49.145865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.490 qpair failed and we were unable to recover it. 
00:41:25.490 [2024-10-07 14:51:49.155764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.490 [2024-10-07 14:51:49.155838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.490 [2024-10-07 14:51:49.155859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.490 [2024-10-07 14:51:49.155871] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.490 [2024-10-07 14:51:49.155881] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.490 [2024-10-07 14:51:49.155902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.490 qpair failed and we were unable to recover it. 
00:41:25.490 [2024-10-07 14:51:49.165807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.490 [2024-10-07 14:51:49.165879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.490 [2024-10-07 14:51:49.165900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.490 [2024-10-07 14:51:49.165912] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.490 [2024-10-07 14:51:49.165921] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.490 [2024-10-07 14:51:49.165943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.490 qpair failed and we were unable to recover it. 
00:41:25.490 [2024-10-07 14:51:49.175854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.490 [2024-10-07 14:51:49.175940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.490 [2024-10-07 14:51:49.175961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.490 [2024-10-07 14:51:49.175974] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.490 [2024-10-07 14:51:49.175984] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.490 [2024-10-07 14:51:49.176015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.490 qpair failed and we were unable to recover it. 
00:41:25.490 [2024-10-07 14:51:49.185776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.490 [2024-10-07 14:51:49.185852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.490 [2024-10-07 14:51:49.185873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.490 [2024-10-07 14:51:49.185885] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.490 [2024-10-07 14:51:49.185894] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.490 [2024-10-07 14:51:49.185934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.490 qpair failed and we were unable to recover it. 
00:41:25.752 [2024-10-07 14:51:49.195958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.752 [2024-10-07 14:51:49.196058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.752 [2024-10-07 14:51:49.196080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.752 [2024-10-07 14:51:49.196092] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.752 [2024-10-07 14:51:49.196102] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.752 [2024-10-07 14:51:49.196124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.752 qpair failed and we were unable to recover it. 
00:41:25.752 [2024-10-07 14:51:49.205891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.752 [2024-10-07 14:51:49.205962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.753 [2024-10-07 14:51:49.205983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.753 [2024-10-07 14:51:49.205994] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.753 [2024-10-07 14:51:49.206009] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.753 [2024-10-07 14:51:49.206031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.753 qpair failed and we were unable to recover it. 
00:41:25.753 [2024-10-07 14:51:49.215904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.753 [2024-10-07 14:51:49.215980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.753 [2024-10-07 14:51:49.216008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.753 [2024-10-07 14:51:49.216020] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.753 [2024-10-07 14:51:49.216029] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.753 [2024-10-07 14:51:49.216052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.753 qpair failed and we were unable to recover it. 
00:41:25.753 [2024-10-07 14:51:49.225980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.753 [2024-10-07 14:51:49.226054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.753 [2024-10-07 14:51:49.226078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.753 [2024-10-07 14:51:49.226090] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.753 [2024-10-07 14:51:49.226099] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.753 [2024-10-07 14:51:49.226122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.753 qpair failed and we were unable to recover it. 
00:41:25.753 [2024-10-07 14:51:49.235975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.753 [2024-10-07 14:51:49.236061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.753 [2024-10-07 14:51:49.236083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.753 [2024-10-07 14:51:49.236096] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.753 [2024-10-07 14:51:49.236105] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.753 [2024-10-07 14:51:49.236127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.753 qpair failed and we were unable to recover it. 
00:41:25.753 [2024-10-07 14:51:49.245934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.753 [2024-10-07 14:51:49.246015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.753 [2024-10-07 14:51:49.246037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.753 [2024-10-07 14:51:49.246048] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.753 [2024-10-07 14:51:49.246058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.753 [2024-10-07 14:51:49.246079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.753 qpair failed and we were unable to recover it. 
00:41:25.753 [2024-10-07 14:51:49.256066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.753 [2024-10-07 14:51:49.256143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.753 [2024-10-07 14:51:49.256165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.753 [2024-10-07 14:51:49.256177] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.753 [2024-10-07 14:51:49.256186] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.753 [2024-10-07 14:51:49.256209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.753 qpair failed and we were unable to recover it. 
00:41:25.753 [2024-10-07 14:51:49.266056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.753 [2024-10-07 14:51:49.266132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.753 [2024-10-07 14:51:49.266153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.753 [2024-10-07 14:51:49.266164] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.753 [2024-10-07 14:51:49.266177] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.753 [2024-10-07 14:51:49.266199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.753 qpair failed and we were unable to recover it. 
00:41:25.753 [2024-10-07 14:51:49.276064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.753 [2024-10-07 14:51:49.276147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.753 [2024-10-07 14:51:49.276168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.753 [2024-10-07 14:51:49.276180] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.753 [2024-10-07 14:51:49.276190] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.753 [2024-10-07 14:51:49.276212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.753 qpair failed and we were unable to recover it. 
00:41:25.753 [2024-10-07 14:51:49.286054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.753 [2024-10-07 14:51:49.286126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.753 [2024-10-07 14:51:49.286148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.753 [2024-10-07 14:51:49.286160] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.753 [2024-10-07 14:51:49.286169] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.753 [2024-10-07 14:51:49.286191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.753 qpair failed and we were unable to recover it. 
00:41:25.753 [2024-10-07 14:51:49.296166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.753 [2024-10-07 14:51:49.296236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.753 [2024-10-07 14:51:49.296257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.753 [2024-10-07 14:51:49.296269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.753 [2024-10-07 14:51:49.296279] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.753 [2024-10-07 14:51:49.296301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.753 qpair failed and we were unable to recover it. 
00:41:25.753 [2024-10-07 14:51:49.306110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.753 [2024-10-07 14:51:49.306192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.753 [2024-10-07 14:51:49.306214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.753 [2024-10-07 14:51:49.306226] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.753 [2024-10-07 14:51:49.306236] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.753 [2024-10-07 14:51:49.306258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.753 qpair failed and we were unable to recover it. 
00:41:25.753 [2024-10-07 14:51:49.316142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.753 [2024-10-07 14:51:49.316219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.753 [2024-10-07 14:51:49.316240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.753 [2024-10-07 14:51:49.316252] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.753 [2024-10-07 14:51:49.316262] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.753 [2024-10-07 14:51:49.316283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.753 qpair failed and we were unable to recover it. 
00:41:25.753 [2024-10-07 14:51:49.326245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.753 [2024-10-07 14:51:49.326323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.753 [2024-10-07 14:51:49.326345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.753 [2024-10-07 14:51:49.326356] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.753 [2024-10-07 14:51:49.326365] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.753 [2024-10-07 14:51:49.326387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.754 qpair failed and we were unable to recover it. 
00:41:25.754 [2024-10-07 14:51:49.336250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.754 [2024-10-07 14:51:49.336348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.754 [2024-10-07 14:51:49.336374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.754 [2024-10-07 14:51:49.336386] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.754 [2024-10-07 14:51:49.336395] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.754 [2024-10-07 14:51:49.336417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.754 qpair failed and we were unable to recover it. 
00:41:25.754 [2024-10-07 14:51:49.346424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.754 [2024-10-07 14:51:49.346499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.754 [2024-10-07 14:51:49.346520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.754 [2024-10-07 14:51:49.346531] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.754 [2024-10-07 14:51:49.346540] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.754 [2024-10-07 14:51:49.346562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.754 qpair failed and we were unable to recover it. 
00:41:25.754 [2024-10-07 14:51:49.356324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.754 [2024-10-07 14:51:49.356398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.754 [2024-10-07 14:51:49.356419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.754 [2024-10-07 14:51:49.356431] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.754 [2024-10-07 14:51:49.356444] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.754 [2024-10-07 14:51:49.356465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.754 qpair failed and we were unable to recover it. 
00:41:25.754 [2024-10-07 14:51:49.366371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.754 [2024-10-07 14:51:49.366450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.754 [2024-10-07 14:51:49.366471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.754 [2024-10-07 14:51:49.366482] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.754 [2024-10-07 14:51:49.366492] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.754 [2024-10-07 14:51:49.366514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.754 qpair failed and we were unable to recover it. 
00:41:25.754 [2024-10-07 14:51:49.376328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.754 [2024-10-07 14:51:49.376398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.754 [2024-10-07 14:51:49.376420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.754 [2024-10-07 14:51:49.376432] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.754 [2024-10-07 14:51:49.376441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.754 [2024-10-07 14:51:49.376462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.754 qpair failed and we were unable to recover it. 
00:41:25.754 [2024-10-07 14:51:49.386400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.754 [2024-10-07 14:51:49.386471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.754 [2024-10-07 14:51:49.386493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.754 [2024-10-07 14:51:49.386504] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.754 [2024-10-07 14:51:49.386514] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.754 [2024-10-07 14:51:49.386535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.754 qpair failed and we were unable to recover it. 
00:41:25.754 [2024-10-07 14:51:49.396401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.754 [2024-10-07 14:51:49.396484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.754 [2024-10-07 14:51:49.396506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.754 [2024-10-07 14:51:49.396518] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.754 [2024-10-07 14:51:49.396527] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.754 [2024-10-07 14:51:49.396548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.754 qpair failed and we were unable to recover it. 
00:41:25.754 [2024-10-07 14:51:49.406514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.754 [2024-10-07 14:51:49.406586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.754 [2024-10-07 14:51:49.406607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.754 [2024-10-07 14:51:49.406619] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.754 [2024-10-07 14:51:49.406628] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.754 [2024-10-07 14:51:49.406649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.754 qpair failed and we were unable to recover it. 
00:41:25.754 [2024-10-07 14:51:49.416441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.754 [2024-10-07 14:51:49.416517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.754 [2024-10-07 14:51:49.416539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.754 [2024-10-07 14:51:49.416553] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.754 [2024-10-07 14:51:49.416563] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.754 [2024-10-07 14:51:49.416584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.754 qpair failed and we were unable to recover it. 
00:41:25.754 [2024-10-07 14:51:49.426515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.754 [2024-10-07 14:51:49.426587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.754 [2024-10-07 14:51:49.426608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.754 [2024-10-07 14:51:49.426619] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.754 [2024-10-07 14:51:49.426629] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.754 [2024-10-07 14:51:49.426651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.754 qpair failed and we were unable to recover it. 
00:41:25.754 [2024-10-07 14:51:49.436554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.754 [2024-10-07 14:51:49.436628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.754 [2024-10-07 14:51:49.436650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.754 [2024-10-07 14:51:49.436661] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.754 [2024-10-07 14:51:49.436670] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.754 [2024-10-07 14:51:49.436692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.754 qpair failed and we were unable to recover it. 
00:41:25.754 [2024-10-07 14:51:49.446568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.754 [2024-10-07 14:51:49.446640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.754 [2024-10-07 14:51:49.446662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.754 [2024-10-07 14:51:49.446676] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.754 [2024-10-07 14:51:49.446686] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.754 [2024-10-07 14:51:49.446707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.754 qpair failed and we were unable to recover it. 
00:41:25.754 [2024-10-07 14:51:49.456700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:25.754 [2024-10-07 14:51:49.456798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:25.754 [2024-10-07 14:51:49.456830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:25.754 [2024-10-07 14:51:49.456844] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:25.754 [2024-10-07 14:51:49.456854] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:25.754 [2024-10-07 14:51:49.456881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:25.754 qpair failed and we were unable to recover it. 
00:41:26.016 [2024-10-07 14:51:49.466639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.016 [2024-10-07 14:51:49.466714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.016 [2024-10-07 14:51:49.466738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.016 [2024-10-07 14:51:49.466751] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.016 [2024-10-07 14:51:49.466761] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.016 [2024-10-07 14:51:49.466784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.016 qpair failed and we were unable to recover it. 
00:41:26.016 [2024-10-07 14:51:49.476659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.016 [2024-10-07 14:51:49.476738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.016 [2024-10-07 14:51:49.476761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.016 [2024-10-07 14:51:49.476773] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.016 [2024-10-07 14:51:49.476783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.016 [2024-10-07 14:51:49.476805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.016 qpair failed and we were unable to recover it. 
00:41:26.016 [2024-10-07 14:51:49.486693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.016 [2024-10-07 14:51:49.486778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.016 [2024-10-07 14:51:49.486810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.016 [2024-10-07 14:51:49.486824] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.016 [2024-10-07 14:51:49.486835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.016 [2024-10-07 14:51:49.486862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.016 qpair failed and we were unable to recover it. 
00:41:26.016 [2024-10-07 14:51:49.496743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.016 [2024-10-07 14:51:49.496818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.016 [2024-10-07 14:51:49.496842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.016 [2024-10-07 14:51:49.496855] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.016 [2024-10-07 14:51:49.496865] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.016 [2024-10-07 14:51:49.496888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.016 qpair failed and we were unable to recover it. 
00:41:26.016 [2024-10-07 14:51:49.506748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.016 [2024-10-07 14:51:49.506829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.016 [2024-10-07 14:51:49.506851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.016 [2024-10-07 14:51:49.506863] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.016 [2024-10-07 14:51:49.506873] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.016 [2024-10-07 14:51:49.506895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.016 qpair failed and we were unable to recover it. 
00:41:26.016 [2024-10-07 14:51:49.516736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.016 [2024-10-07 14:51:49.516813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.016 [2024-10-07 14:51:49.516834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.016 [2024-10-07 14:51:49.516846] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.016 [2024-10-07 14:51:49.516856] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.016 [2024-10-07 14:51:49.516884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.016 qpair failed and we were unable to recover it. 
00:41:26.016 [2024-10-07 14:51:49.526785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.016 [2024-10-07 14:51:49.526856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.016 [2024-10-07 14:51:49.526878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.016 [2024-10-07 14:51:49.526890] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.016 [2024-10-07 14:51:49.526900] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.016 [2024-10-07 14:51:49.526921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.017 qpair failed and we were unable to recover it. 
00:41:26.017 [2024-10-07 14:51:49.536829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.017 [2024-10-07 14:51:49.536904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.017 [2024-10-07 14:51:49.536926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.017 [2024-10-07 14:51:49.536941] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.017 [2024-10-07 14:51:49.536951] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.017 [2024-10-07 14:51:49.536972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.017 qpair failed and we were unable to recover it. 
00:41:26.017 [2024-10-07 14:51:49.546841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.017 [2024-10-07 14:51:49.546931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.017 [2024-10-07 14:51:49.546953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.017 [2024-10-07 14:51:49.546965] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.017 [2024-10-07 14:51:49.546975] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.017 [2024-10-07 14:51:49.546996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.017 qpair failed and we were unable to recover it. 
00:41:26.017 [2024-10-07 14:51:49.556889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.017 [2024-10-07 14:51:49.556964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.017 [2024-10-07 14:51:49.556986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.017 [2024-10-07 14:51:49.556997] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.017 [2024-10-07 14:51:49.557015] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.017 [2024-10-07 14:51:49.557037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.017 qpair failed and we were unable to recover it. 
00:41:26.017 [2024-10-07 14:51:49.567049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.017 [2024-10-07 14:51:49.567124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.017 [2024-10-07 14:51:49.567145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.017 [2024-10-07 14:51:49.567157] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.017 [2024-10-07 14:51:49.567165] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.017 [2024-10-07 14:51:49.567187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.017 qpair failed and we were unable to recover it. 
00:41:26.017 [2024-10-07 14:51:49.576904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.017 [2024-10-07 14:51:49.576983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.017 [2024-10-07 14:51:49.577013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.017 [2024-10-07 14:51:49.577025] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.017 [2024-10-07 14:51:49.577035] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.017 [2024-10-07 14:51:49.577057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.017 qpair failed and we were unable to recover it. 
00:41:26.017 [2024-10-07 14:51:49.586959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.017 [2024-10-07 14:51:49.587042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.017 [2024-10-07 14:51:49.587064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.017 [2024-10-07 14:51:49.587076] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.017 [2024-10-07 14:51:49.587085] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.017 [2024-10-07 14:51:49.587113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.017 qpair failed and we were unable to recover it. 
00:41:26.017 [2024-10-07 14:51:49.597007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.017 [2024-10-07 14:51:49.597083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.017 [2024-10-07 14:51:49.597105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.017 [2024-10-07 14:51:49.597116] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.017 [2024-10-07 14:51:49.597126] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.017 [2024-10-07 14:51:49.597147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.017 qpair failed and we were unable to recover it. 
00:41:26.017 [2024-10-07 14:51:49.606978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.017 [2024-10-07 14:51:49.607051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.017 [2024-10-07 14:51:49.607073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.017 [2024-10-07 14:51:49.607084] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.017 [2024-10-07 14:51:49.607094] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.017 [2024-10-07 14:51:49.607115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.017 qpair failed and we were unable to recover it. 
00:41:26.017 [2024-10-07 14:51:49.617035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.017 [2024-10-07 14:51:49.617114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.017 [2024-10-07 14:51:49.617135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.017 [2024-10-07 14:51:49.617146] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.017 [2024-10-07 14:51:49.617155] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.017 [2024-10-07 14:51:49.617177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.017 qpair failed and we were unable to recover it. 
00:41:26.017 [2024-10-07 14:51:49.626996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.017 [2024-10-07 14:51:49.627074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.017 [2024-10-07 14:51:49.627099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.017 [2024-10-07 14:51:49.627111] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.017 [2024-10-07 14:51:49.627120] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.017 [2024-10-07 14:51:49.627142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.017 qpair failed and we were unable to recover it. 
00:41:26.017 [2024-10-07 14:51:49.637114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.017 [2024-10-07 14:51:49.637195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.017 [2024-10-07 14:51:49.637216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.017 [2024-10-07 14:51:49.637228] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.017 [2024-10-07 14:51:49.637237] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.017 [2024-10-07 14:51:49.637259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.017 qpair failed and we were unable to recover it. 
00:41:26.017 [2024-10-07 14:51:49.647085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.017 [2024-10-07 14:51:49.647158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.017 [2024-10-07 14:51:49.647180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.017 [2024-10-07 14:51:49.647192] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.017 [2024-10-07 14:51:49.647201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.017 [2024-10-07 14:51:49.647222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.017 qpair failed and we were unable to recover it.
00:41:26.017 [2024-10-07 14:51:49.657170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.017 [2024-10-07 14:51:49.657244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.017 [2024-10-07 14:51:49.657265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.017 [2024-10-07 14:51:49.657276] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.017 [2024-10-07 14:51:49.657286] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.017 [2024-10-07 14:51:49.657308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.017 qpair failed and we were unable to recover it.
00:41:26.017 [2024-10-07 14:51:49.667161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.017 [2024-10-07 14:51:49.667234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.017 [2024-10-07 14:51:49.667255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.017 [2024-10-07 14:51:49.667266] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.017 [2024-10-07 14:51:49.667276] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.018 [2024-10-07 14:51:49.667300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.018 qpair failed and we were unable to recover it.
00:41:26.018 [2024-10-07 14:51:49.677248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.018 [2024-10-07 14:51:49.677330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.018 [2024-10-07 14:51:49.677351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.018 [2024-10-07 14:51:49.677363] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.018 [2024-10-07 14:51:49.677373] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.018 [2024-10-07 14:51:49.677394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.018 qpair failed and we were unable to recover it.
00:41:26.018 [2024-10-07 14:51:49.687248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.018 [2024-10-07 14:51:49.687322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.018 [2024-10-07 14:51:49.687344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.018 [2024-10-07 14:51:49.687356] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.018 [2024-10-07 14:51:49.687366] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.018 [2024-10-07 14:51:49.687388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.018 qpair failed and we were unable to recover it.
00:41:26.018 [2024-10-07 14:51:49.697276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.018 [2024-10-07 14:51:49.697345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.018 [2024-10-07 14:51:49.697366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.018 [2024-10-07 14:51:49.697378] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.018 [2024-10-07 14:51:49.697387] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.018 [2024-10-07 14:51:49.697409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.018 qpair failed and we were unable to recover it.
00:41:26.018 [2024-10-07 14:51:49.707329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.018 [2024-10-07 14:51:49.707402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.018 [2024-10-07 14:51:49.707424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.018 [2024-10-07 14:51:49.707435] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.018 [2024-10-07 14:51:49.707444] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.018 [2024-10-07 14:51:49.707466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.018 qpair failed and we were unable to recover it.
00:41:26.018 [2024-10-07 14:51:49.717322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.018 [2024-10-07 14:51:49.717403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.018 [2024-10-07 14:51:49.717427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.018 [2024-10-07 14:51:49.717439] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.018 [2024-10-07 14:51:49.717448] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.018 [2024-10-07 14:51:49.717470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.018 qpair failed and we were unable to recover it.
00:41:26.280 [2024-10-07 14:51:49.727402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.280 [2024-10-07 14:51:49.727471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.280 [2024-10-07 14:51:49.727493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.280 [2024-10-07 14:51:49.727504] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.280 [2024-10-07 14:51:49.727513] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.280 [2024-10-07 14:51:49.727535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.280 qpair failed and we were unable to recover it.
00:41:26.280 [2024-10-07 14:51:49.737400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.280 [2024-10-07 14:51:49.737481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.280 [2024-10-07 14:51:49.737503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.280 [2024-10-07 14:51:49.737514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.280 [2024-10-07 14:51:49.737523] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.280 [2024-10-07 14:51:49.737544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.280 qpair failed and we were unable to recover it.
00:41:26.280 [2024-10-07 14:51:49.747406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.280 [2024-10-07 14:51:49.747497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.280 [2024-10-07 14:51:49.747519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.280 [2024-10-07 14:51:49.747530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.280 [2024-10-07 14:51:49.747539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.280 [2024-10-07 14:51:49.747560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.280 qpair failed and we were unable to recover it.
00:41:26.280 [2024-10-07 14:51:49.757440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.280 [2024-10-07 14:51:49.757516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.280 [2024-10-07 14:51:49.757537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.280 [2024-10-07 14:51:49.757548] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.280 [2024-10-07 14:51:49.757561] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.280 [2024-10-07 14:51:49.757583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.280 qpair failed and we were unable to recover it.
00:41:26.280 [2024-10-07 14:51:49.767414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.280 [2024-10-07 14:51:49.767489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.280 [2024-10-07 14:51:49.767510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.280 [2024-10-07 14:51:49.767522] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.280 [2024-10-07 14:51:49.767531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.280 [2024-10-07 14:51:49.767552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.280 qpair failed and we were unable to recover it.
00:41:26.280 [2024-10-07 14:51:49.777477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.280 [2024-10-07 14:51:49.777548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.280 [2024-10-07 14:51:49.777569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.280 [2024-10-07 14:51:49.777581] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.280 [2024-10-07 14:51:49.777590] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.280 [2024-10-07 14:51:49.777612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.280 qpair failed and we were unable to recover it.
00:41:26.280 [2024-10-07 14:51:49.787540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.280 [2024-10-07 14:51:49.787616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.280 [2024-10-07 14:51:49.787638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.280 [2024-10-07 14:51:49.787653] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.280 [2024-10-07 14:51:49.787662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.280 [2024-10-07 14:51:49.787684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.280 qpair failed and we were unable to recover it.
00:41:26.280 [2024-10-07 14:51:49.797458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.280 [2024-10-07 14:51:49.797531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.280 [2024-10-07 14:51:49.797552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.280 [2024-10-07 14:51:49.797564] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.280 [2024-10-07 14:51:49.797574] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.280 [2024-10-07 14:51:49.797597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.280 qpair failed and we were unable to recover it.
00:41:26.280 [2024-10-07 14:51:49.807592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.280 [2024-10-07 14:51:49.807672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.280 [2024-10-07 14:51:49.807696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.280 [2024-10-07 14:51:49.807708] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.280 [2024-10-07 14:51:49.807718] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.280 [2024-10-07 14:51:49.807741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.280 qpair failed and we were unable to recover it.
00:41:26.280 [2024-10-07 14:51:49.817649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.280 [2024-10-07 14:51:49.817722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.280 [2024-10-07 14:51:49.817745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.280 [2024-10-07 14:51:49.817756] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.280 [2024-10-07 14:51:49.817766] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.280 [2024-10-07 14:51:49.817788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.280 qpair failed and we were unable to recover it.
00:41:26.280 [2024-10-07 14:51:49.827640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.280 [2024-10-07 14:51:49.827719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.280 [2024-10-07 14:51:49.827741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.280 [2024-10-07 14:51:49.827753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.280 [2024-10-07 14:51:49.827763] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.280 [2024-10-07 14:51:49.827784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.280 qpair failed and we were unable to recover it.
00:41:26.280 [2024-10-07 14:51:49.837588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.280 [2024-10-07 14:51:49.837662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.280 [2024-10-07 14:51:49.837684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.280 [2024-10-07 14:51:49.837696] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.280 [2024-10-07 14:51:49.837706] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.280 [2024-10-07 14:51:49.837727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.280 qpair failed and we were unable to recover it.
00:41:26.281 [2024-10-07 14:51:49.847681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.281 [2024-10-07 14:51:49.847760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.281 [2024-10-07 14:51:49.847782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.281 [2024-10-07 14:51:49.847800] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.281 [2024-10-07 14:51:49.847812] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.281 [2024-10-07 14:51:49.847837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.281 qpair failed and we were unable to recover it.
00:41:26.281 [2024-10-07 14:51:49.857720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.281 [2024-10-07 14:51:49.857802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.281 [2024-10-07 14:51:49.857835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.281 [2024-10-07 14:51:49.857850] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.281 [2024-10-07 14:51:49.857860] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.281 [2024-10-07 14:51:49.857889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.281 qpair failed and we were unable to recover it.
00:41:26.281 [2024-10-07 14:51:49.867731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.281 [2024-10-07 14:51:49.867810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.281 [2024-10-07 14:51:49.867834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.281 [2024-10-07 14:51:49.867846] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.281 [2024-10-07 14:51:49.867856] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.281 [2024-10-07 14:51:49.867879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.281 qpair failed and we were unable to recover it.
00:41:26.281 [2024-10-07 14:51:49.877805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.281 [2024-10-07 14:51:49.877881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.281 [2024-10-07 14:51:49.877903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.281 [2024-10-07 14:51:49.877915] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.281 [2024-10-07 14:51:49.877925] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.281 [2024-10-07 14:51:49.877948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.281 qpair failed and we were unable to recover it.
00:41:26.281 [2024-10-07 14:51:49.887836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.281 [2024-10-07 14:51:49.887928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.281 [2024-10-07 14:51:49.887950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.281 [2024-10-07 14:51:49.887961] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.281 [2024-10-07 14:51:49.887970] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.281 [2024-10-07 14:51:49.887991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.281 qpair failed and we were unable to recover it.
00:41:26.281 [2024-10-07 14:51:49.897844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.281 [2024-10-07 14:51:49.897920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.281 [2024-10-07 14:51:49.897942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.281 [2024-10-07 14:51:49.897953] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.281 [2024-10-07 14:51:49.897963] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.281 [2024-10-07 14:51:49.897985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.281 qpair failed and we were unable to recover it.
00:41:26.281 [2024-10-07 14:51:49.907836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.281 [2024-10-07 14:51:49.907908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.281 [2024-10-07 14:51:49.907930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.281 [2024-10-07 14:51:49.907941] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.281 [2024-10-07 14:51:49.907951] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.281 [2024-10-07 14:51:49.907972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.281 qpair failed and we were unable to recover it.
00:41:26.281 [2024-10-07 14:51:49.917911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.281 [2024-10-07 14:51:49.918019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.281 [2024-10-07 14:51:49.918040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.281 [2024-10-07 14:51:49.918052] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.281 [2024-10-07 14:51:49.918061] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.281 [2024-10-07 14:51:49.918083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.281 qpair failed and we were unable to recover it.
00:41:26.281 [2024-10-07 14:51:49.927858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.281 [2024-10-07 14:51:49.927949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.281 [2024-10-07 14:51:49.927972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.281 [2024-10-07 14:51:49.927985] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.281 [2024-10-07 14:51:49.927994] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.281 [2024-10-07 14:51:49.928022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.281 qpair failed and we were unable to recover it.
00:41:26.281 [2024-10-07 14:51:49.937919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.281 [2024-10-07 14:51:49.937991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.281 [2024-10-07 14:51:49.938019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.281 [2024-10-07 14:51:49.938035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.281 [2024-10-07 14:51:49.938044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.281 [2024-10-07 14:51:49.938066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.281 qpair failed and we were unable to recover it.
00:41:26.281 [2024-10-07 14:51:49.948032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.281 [2024-10-07 14:51:49.948151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.281 [2024-10-07 14:51:49.948172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.281 [2024-10-07 14:51:49.948184] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.281 [2024-10-07 14:51:49.948194] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.281 [2024-10-07 14:51:49.948215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.281 qpair failed and we were unable to recover it.
00:41:26.281 [2024-10-07 14:51:49.957916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.281 [2024-10-07 14:51:49.957992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.281 [2024-10-07 14:51:49.958019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.281 [2024-10-07 14:51:49.958030] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.281 [2024-10-07 14:51:49.958039] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.281 [2024-10-07 14:51:49.958061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.281 qpair failed and we were unable to recover it.
00:41:26.281 [2024-10-07 14:51:49.967988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.281 [2024-10-07 14:51:49.968066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.281 [2024-10-07 14:51:49.968088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.281 [2024-10-07 14:51:49.968099] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.281 [2024-10-07 14:51:49.968108] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.281 [2024-10-07 14:51:49.968129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.281 qpair failed and we were unable to recover it.
00:41:26.281 [2024-10-07 14:51:49.978078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:26.281 [2024-10-07 14:51:49.978165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:26.281 [2024-10-07 14:51:49.978186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:26.281 [2024-10-07 14:51:49.978197] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:26.281 [2024-10-07 14:51:49.978206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:26.282 [2024-10-07 14:51:49.978227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:26.282 qpair failed and we were unable to recover it.
00:41:26.543 [2024-10-07 14:51:49.988080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.543 [2024-10-07 14:51:49.988157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.543 [2024-10-07 14:51:49.988179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.543 [2024-10-07 14:51:49.988191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.543 [2024-10-07 14:51:49.988200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.543 [2024-10-07 14:51:49.988221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.543 qpair failed and we were unable to recover it. 
00:41:26.543 [2024-10-07 14:51:49.998138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.543 [2024-10-07 14:51:49.998215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.543 [2024-10-07 14:51:49.998236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.543 [2024-10-07 14:51:49.998247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.543 [2024-10-07 14:51:49.998257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.543 [2024-10-07 14:51:49.998278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.543 qpair failed and we were unable to recover it. 
00:41:26.543 [2024-10-07 14:51:50.008252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.543 [2024-10-07 14:51:50.008335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.543 [2024-10-07 14:51:50.008357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.543 [2024-10-07 14:51:50.008369] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.543 [2024-10-07 14:51:50.008378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.543 [2024-10-07 14:51:50.008401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.543 qpair failed and we were unable to recover it. 
00:41:26.543 [2024-10-07 14:51:50.018197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.543 [2024-10-07 14:51:50.018277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.543 [2024-10-07 14:51:50.018299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.543 [2024-10-07 14:51:50.018311] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.543 [2024-10-07 14:51:50.018320] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.543 [2024-10-07 14:51:50.018343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.543 qpair failed and we were unable to recover it. 
00:41:26.543 [2024-10-07 14:51:50.028161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.543 [2024-10-07 14:51:50.028253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.543 [2024-10-07 14:51:50.028277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.543 [2024-10-07 14:51:50.028295] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.543 [2024-10-07 14:51:50.028305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.543 [2024-10-07 14:51:50.028328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.543 qpair failed and we were unable to recover it. 
00:41:26.543 [2024-10-07 14:51:50.038693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.543 [2024-10-07 14:51:50.038793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.543 [2024-10-07 14:51:50.038814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.543 [2024-10-07 14:51:50.038826] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.543 [2024-10-07 14:51:50.038836] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.543 [2024-10-07 14:51:50.038858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.543 qpair failed and we were unable to recover it. 
00:41:26.543 [2024-10-07 14:51:50.048238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.543 [2024-10-07 14:51:50.048325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.543 [2024-10-07 14:51:50.048347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.543 [2024-10-07 14:51:50.048359] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.543 [2024-10-07 14:51:50.048368] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.543 [2024-10-07 14:51:50.048389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.543 qpair failed and we were unable to recover it. 
00:41:26.543 [2024-10-07 14:51:50.058353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.543 [2024-10-07 14:51:50.058437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.543 [2024-10-07 14:51:50.058458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.543 [2024-10-07 14:51:50.058470] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.543 [2024-10-07 14:51:50.058479] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.543 [2024-10-07 14:51:50.058501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.543 qpair failed and we were unable to recover it. 
00:41:26.543 [2024-10-07 14:51:50.068342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.543 [2024-10-07 14:51:50.068422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.543 [2024-10-07 14:51:50.068443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.543 [2024-10-07 14:51:50.068455] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.543 [2024-10-07 14:51:50.068463] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.543 [2024-10-07 14:51:50.068485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.543 qpair failed and we were unable to recover it. 
00:41:26.543 [2024-10-07 14:51:50.078374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.543 [2024-10-07 14:51:50.078458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.543 [2024-10-07 14:51:50.078479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.543 [2024-10-07 14:51:50.078490] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.543 [2024-10-07 14:51:50.078499] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.543 [2024-10-07 14:51:50.078520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.544 qpair failed and we were unable to recover it. 
00:41:26.544 [2024-10-07 14:51:50.088367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.544 [2024-10-07 14:51:50.088447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.544 [2024-10-07 14:51:50.088469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.544 [2024-10-07 14:51:50.088481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.544 [2024-10-07 14:51:50.088490] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.544 [2024-10-07 14:51:50.088512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.544 qpair failed and we were unable to recover it. 
00:41:26.544 [2024-10-07 14:51:50.098347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.544 [2024-10-07 14:51:50.098418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.544 [2024-10-07 14:51:50.098439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.544 [2024-10-07 14:51:50.098451] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.544 [2024-10-07 14:51:50.098460] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.544 [2024-10-07 14:51:50.098481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.544 qpair failed and we were unable to recover it. 
00:41:26.544 [2024-10-07 14:51:50.108399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.544 [2024-10-07 14:51:50.108497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.544 [2024-10-07 14:51:50.108519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.544 [2024-10-07 14:51:50.108531] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.544 [2024-10-07 14:51:50.108540] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.544 [2024-10-07 14:51:50.108561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.544 qpair failed and we were unable to recover it. 
00:41:26.544 [2024-10-07 14:51:50.118352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.544 [2024-10-07 14:51:50.118426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.544 [2024-10-07 14:51:50.118451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.544 [2024-10-07 14:51:50.118463] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.544 [2024-10-07 14:51:50.118472] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.544 [2024-10-07 14:51:50.118495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.544 qpair failed and we were unable to recover it. 
00:41:26.544 [2024-10-07 14:51:50.128488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.544 [2024-10-07 14:51:50.128561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.544 [2024-10-07 14:51:50.128582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.544 [2024-10-07 14:51:50.128593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.544 [2024-10-07 14:51:50.128603] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.544 [2024-10-07 14:51:50.128624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.544 qpair failed and we were unable to recover it. 
00:41:26.544 [2024-10-07 14:51:50.138522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.544 [2024-10-07 14:51:50.138595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.544 [2024-10-07 14:51:50.138616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.544 [2024-10-07 14:51:50.138628] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.544 [2024-10-07 14:51:50.138637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.544 [2024-10-07 14:51:50.138659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.544 qpair failed and we were unable to recover it. 
00:41:26.544 [2024-10-07 14:51:50.148492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.544 [2024-10-07 14:51:50.148567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.544 [2024-10-07 14:51:50.148588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.544 [2024-10-07 14:51:50.148600] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.544 [2024-10-07 14:51:50.148609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.544 [2024-10-07 14:51:50.148630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.544 qpair failed and we were unable to recover it. 
00:41:26.544 [2024-10-07 14:51:50.158586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.544 [2024-10-07 14:51:50.158668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.544 [2024-10-07 14:51:50.158690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.544 [2024-10-07 14:51:50.158701] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.544 [2024-10-07 14:51:50.158710] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.544 [2024-10-07 14:51:50.158736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.544 qpair failed and we were unable to recover it. 
00:41:26.544 [2024-10-07 14:51:50.168545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.544 [2024-10-07 14:51:50.168617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.544 [2024-10-07 14:51:50.168639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.544 [2024-10-07 14:51:50.168650] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.544 [2024-10-07 14:51:50.168659] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.544 [2024-10-07 14:51:50.168680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.544 qpair failed and we were unable to recover it. 
00:41:26.544 [2024-10-07 14:51:50.178615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.544 [2024-10-07 14:51:50.178725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.544 [2024-10-07 14:51:50.178758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.544 [2024-10-07 14:51:50.178772] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.544 [2024-10-07 14:51:50.178783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.544 [2024-10-07 14:51:50.178823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.544 qpair failed and we were unable to recover it. 
00:41:26.544 [2024-10-07 14:51:50.188668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.544 [2024-10-07 14:51:50.188748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.544 [2024-10-07 14:51:50.188772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.544 [2024-10-07 14:51:50.188784] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.544 [2024-10-07 14:51:50.188794] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.544 [2024-10-07 14:51:50.188818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.544 qpair failed and we were unable to recover it. 
00:41:26.544 [2024-10-07 14:51:50.198668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.544 [2024-10-07 14:51:50.198744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.544 [2024-10-07 14:51:50.198766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.544 [2024-10-07 14:51:50.198778] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.544 [2024-10-07 14:51:50.198788] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.544 [2024-10-07 14:51:50.198810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.544 qpair failed and we were unable to recover it. 
00:41:26.544 [2024-10-07 14:51:50.208608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.544 [2024-10-07 14:51:50.208688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.544 [2024-10-07 14:51:50.208714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.544 [2024-10-07 14:51:50.208726] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.544 [2024-10-07 14:51:50.208736] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.544 [2024-10-07 14:51:50.208758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.544 qpair failed and we were unable to recover it. 
00:41:26.544 [2024-10-07 14:51:50.218650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.544 [2024-10-07 14:51:50.218752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.544 [2024-10-07 14:51:50.218774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.544 [2024-10-07 14:51:50.218785] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.544 [2024-10-07 14:51:50.218795] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.544 [2024-10-07 14:51:50.218816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.544 qpair failed and we were unable to recover it. 
00:41:26.545 [2024-10-07 14:51:50.228735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.545 [2024-10-07 14:51:50.228832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.545 [2024-10-07 14:51:50.228865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.545 [2024-10-07 14:51:50.228880] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.545 [2024-10-07 14:51:50.228890] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.545 [2024-10-07 14:51:50.228917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.545 qpair failed and we were unable to recover it. 
00:41:26.545 [2024-10-07 14:51:50.238787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.545 [2024-10-07 14:51:50.238873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.545 [2024-10-07 14:51:50.238897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.545 [2024-10-07 14:51:50.238909] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.545 [2024-10-07 14:51:50.238919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.545 [2024-10-07 14:51:50.238942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.545 qpair failed and we were unable to recover it. 
00:41:26.545 [2024-10-07 14:51:50.248779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.545 [2024-10-07 14:51:50.248853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.545 [2024-10-07 14:51:50.248876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.545 [2024-10-07 14:51:50.248887] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.545 [2024-10-07 14:51:50.248897] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.545 [2024-10-07 14:51:50.248924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.545 qpair failed and we were unable to recover it. 
00:41:26.806 [2024-10-07 14:51:50.258801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.806 [2024-10-07 14:51:50.258877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.806 [2024-10-07 14:51:50.258899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.806 [2024-10-07 14:51:50.258910] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.806 [2024-10-07 14:51:50.258920] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.806 [2024-10-07 14:51:50.258941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.806 qpair failed and we were unable to recover it. 
00:41:26.806 [2024-10-07 14:51:50.268839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.806 [2024-10-07 14:51:50.268908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.806 [2024-10-07 14:51:50.268929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.806 [2024-10-07 14:51:50.268940] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.806 [2024-10-07 14:51:50.268949] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.806 [2024-10-07 14:51:50.268971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.806 qpair failed and we were unable to recover it. 
00:41:26.806 [2024-10-07 14:51:50.278791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.806 [2024-10-07 14:51:50.278866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.806 [2024-10-07 14:51:50.278887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.806 [2024-10-07 14:51:50.278898] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.806 [2024-10-07 14:51:50.278908] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.806 [2024-10-07 14:51:50.278930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.806 qpair failed and we were unable to recover it. 
00:41:26.806 [2024-10-07 14:51:50.288910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.806 [2024-10-07 14:51:50.288991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.806 [2024-10-07 14:51:50.289018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.806 [2024-10-07 14:51:50.289030] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.806 [2024-10-07 14:51:50.289040] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.806 [2024-10-07 14:51:50.289062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.806 qpair failed and we were unable to recover it. 
00:41:26.806 [2024-10-07 14:51:50.298943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.806 [2024-10-07 14:51:50.299024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.806 [2024-10-07 14:51:50.299049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.806 [2024-10-07 14:51:50.299061] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.806 [2024-10-07 14:51:50.299070] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.806 [2024-10-07 14:51:50.299091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.806 qpair failed and we were unable to recover it. 
00:41:26.806 [2024-10-07 14:51:50.308991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.806 [2024-10-07 14:51:50.309092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.806 [2024-10-07 14:51:50.309113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.806 [2024-10-07 14:51:50.309125] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.806 [2024-10-07 14:51:50.309134] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.806 [2024-10-07 14:51:50.309156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.806 qpair failed and we were unable to recover it. 
00:41:26.806 [2024-10-07 14:51:50.318986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.806 [2024-10-07 14:51:50.319071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.806 [2024-10-07 14:51:50.319093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.806 [2024-10-07 14:51:50.319105] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.806 [2024-10-07 14:51:50.319115] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.806 [2024-10-07 14:51:50.319136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.806 qpair failed and we were unable to recover it. 
00:41:26.806 [2024-10-07 14:51:50.328993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.806 [2024-10-07 14:51:50.329079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.806 [2024-10-07 14:51:50.329100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.806 [2024-10-07 14:51:50.329111] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.806 [2024-10-07 14:51:50.329121] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.806 [2024-10-07 14:51:50.329142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.806 qpair failed and we were unable to recover it. 
00:41:26.806 [2024-10-07 14:51:50.339060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.806 [2024-10-07 14:51:50.339145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.806 [2024-10-07 14:51:50.339167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.806 [2024-10-07 14:51:50.339182] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.806 [2024-10-07 14:51:50.339194] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.806 [2024-10-07 14:51:50.339217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.806 qpair failed and we were unable to recover it. 
00:41:26.806 [2024-10-07 14:51:50.349066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.806 [2024-10-07 14:51:50.349142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.806 [2024-10-07 14:51:50.349163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.806 [2024-10-07 14:51:50.349175] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.807 [2024-10-07 14:51:50.349184] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.807 [2024-10-07 14:51:50.349205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.807 qpair failed and we were unable to recover it. 
00:41:26.807 [2024-10-07 14:51:50.359096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.807 [2024-10-07 14:51:50.359175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.807 [2024-10-07 14:51:50.359196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.807 [2024-10-07 14:51:50.359208] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.807 [2024-10-07 14:51:50.359224] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.807 [2024-10-07 14:51:50.359247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.807 qpair failed and we were unable to recover it. 
00:41:26.807 [2024-10-07 14:51:50.369070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.807 [2024-10-07 14:51:50.369141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.807 [2024-10-07 14:51:50.369162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.807 [2024-10-07 14:51:50.369174] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.807 [2024-10-07 14:51:50.369183] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.807 [2024-10-07 14:51:50.369206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.807 qpair failed and we were unable to recover it. 
00:41:26.807 [2024-10-07 14:51:50.379154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.807 [2024-10-07 14:51:50.379229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.807 [2024-10-07 14:51:50.379250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.807 [2024-10-07 14:51:50.379262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.807 [2024-10-07 14:51:50.379272] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.807 [2024-10-07 14:51:50.379293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.807 qpair failed and we were unable to recover it. 
00:41:26.807 [2024-10-07 14:51:50.389149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.807 [2024-10-07 14:51:50.389236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.807 [2024-10-07 14:51:50.389258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.807 [2024-10-07 14:51:50.389270] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.807 [2024-10-07 14:51:50.389280] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.807 [2024-10-07 14:51:50.389301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.807 qpair failed and we were unable to recover it. 
00:41:26.807 [2024-10-07 14:51:50.399186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.807 [2024-10-07 14:51:50.399260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.807 [2024-10-07 14:51:50.399281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.807 [2024-10-07 14:51:50.399293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.807 [2024-10-07 14:51:50.399302] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.807 [2024-10-07 14:51:50.399323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.807 qpair failed and we were unable to recover it. 
00:41:26.807 [2024-10-07 14:51:50.409146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.807 [2024-10-07 14:51:50.409219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.807 [2024-10-07 14:51:50.409240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.807 [2024-10-07 14:51:50.409251] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.807 [2024-10-07 14:51:50.409260] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.807 [2024-10-07 14:51:50.409282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.807 qpair failed and we were unable to recover it. 
00:41:26.807 [2024-10-07 14:51:50.419307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.807 [2024-10-07 14:51:50.419383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.807 [2024-10-07 14:51:50.419404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.807 [2024-10-07 14:51:50.419416] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.807 [2024-10-07 14:51:50.419425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.807 [2024-10-07 14:51:50.419446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.807 qpair failed and we were unable to recover it. 
00:41:26.807 [2024-10-07 14:51:50.429300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.807 [2024-10-07 14:51:50.429373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.807 [2024-10-07 14:51:50.429394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.807 [2024-10-07 14:51:50.429416] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.807 [2024-10-07 14:51:50.429426] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.807 [2024-10-07 14:51:50.429448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.807 qpair failed and we were unable to recover it. 
00:41:26.807 [2024-10-07 14:51:50.439306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.807 [2024-10-07 14:51:50.439423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.807 [2024-10-07 14:51:50.439444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.807 [2024-10-07 14:51:50.439456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.807 [2024-10-07 14:51:50.439465] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.807 [2024-10-07 14:51:50.439486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.807 qpair failed and we were unable to recover it. 
00:41:26.807 [2024-10-07 14:51:50.449323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.807 [2024-10-07 14:51:50.449396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.807 [2024-10-07 14:51:50.449416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.807 [2024-10-07 14:51:50.449427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.807 [2024-10-07 14:51:50.449437] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.807 [2024-10-07 14:51:50.449458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.807 qpair failed and we were unable to recover it. 
00:41:26.807 [2024-10-07 14:51:50.459409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.807 [2024-10-07 14:51:50.459488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.807 [2024-10-07 14:51:50.459510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.807 [2024-10-07 14:51:50.459521] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.807 [2024-10-07 14:51:50.459530] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.807 [2024-10-07 14:51:50.459552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.807 qpair failed and we were unable to recover it. 
00:41:26.807 [2024-10-07 14:51:50.469381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.807 [2024-10-07 14:51:50.469497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.807 [2024-10-07 14:51:50.469518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.807 [2024-10-07 14:51:50.469530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.807 [2024-10-07 14:51:50.469538] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.807 [2024-10-07 14:51:50.469560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.807 qpair failed and we were unable to recover it. 
00:41:26.807 [2024-10-07 14:51:50.479414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.807 [2024-10-07 14:51:50.479489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.807 [2024-10-07 14:51:50.479511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.807 [2024-10-07 14:51:50.479522] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.807 [2024-10-07 14:51:50.479532] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.808 [2024-10-07 14:51:50.479553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.808 qpair failed and we were unable to recover it. 
00:41:26.808 [2024-10-07 14:51:50.489531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.808 [2024-10-07 14:51:50.489632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.808 [2024-10-07 14:51:50.489654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.808 [2024-10-07 14:51:50.489665] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.808 [2024-10-07 14:51:50.489675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.808 [2024-10-07 14:51:50.489696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.808 qpair failed and we were unable to recover it. 
00:41:26.808 [2024-10-07 14:51:50.499392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.808 [2024-10-07 14:51:50.499509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.808 [2024-10-07 14:51:50.499531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.808 [2024-10-07 14:51:50.499543] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.808 [2024-10-07 14:51:50.499552] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.808 [2024-10-07 14:51:50.499573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.808 qpair failed and we were unable to recover it. 
00:41:26.808 [2024-10-07 14:51:50.509517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:26.808 [2024-10-07 14:51:50.509619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:26.808 [2024-10-07 14:51:50.509640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:26.808 [2024-10-07 14:51:50.509651] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:26.808 [2024-10-07 14:51:50.509661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:26.808 [2024-10-07 14:51:50.509698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:26.808 qpair failed and we were unable to recover it. 
00:41:27.069 [2024-10-07 14:51:50.519539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.069 [2024-10-07 14:51:50.519617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.069 [2024-10-07 14:51:50.519641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.069 [2024-10-07 14:51:50.519661] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.069 [2024-10-07 14:51:50.519670] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.069 [2024-10-07 14:51:50.519692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.069 qpair failed and we were unable to recover it. 
00:41:27.069 [2024-10-07 14:51:50.529584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.069 [2024-10-07 14:51:50.529654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.069 [2024-10-07 14:51:50.529675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.069 [2024-10-07 14:51:50.529687] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.069 [2024-10-07 14:51:50.529696] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.069 [2024-10-07 14:51:50.529717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.069 qpair failed and we were unable to recover it. 
00:41:27.069 [2024-10-07 14:51:50.539589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.069 [2024-10-07 14:51:50.539669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.069 [2024-10-07 14:51:50.539691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.069 [2024-10-07 14:51:50.539702] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.069 [2024-10-07 14:51:50.539712] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.069 [2024-10-07 14:51:50.539733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.069 qpair failed and we were unable to recover it. 
00:41:27.069 [2024-10-07 14:51:50.549647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.069 [2024-10-07 14:51:50.549724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.069 [2024-10-07 14:51:50.549746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.069 [2024-10-07 14:51:50.549757] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.069 [2024-10-07 14:51:50.549766] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.069 [2024-10-07 14:51:50.549787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.069 qpair failed and we were unable to recover it. 
00:41:27.069 [2024-10-07 14:51:50.559652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.069 [2024-10-07 14:51:50.559726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.069 [2024-10-07 14:51:50.559748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.069 [2024-10-07 14:51:50.559759] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.069 [2024-10-07 14:51:50.559768] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.069 [2024-10-07 14:51:50.559790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.069 qpair failed and we were unable to recover it. 
00:41:27.069 [2024-10-07 14:51:50.569653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.069 [2024-10-07 14:51:50.569721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.069 [2024-10-07 14:51:50.569742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.069 [2024-10-07 14:51:50.569753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.069 [2024-10-07 14:51:50.569762] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.069 [2024-10-07 14:51:50.569783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.069 qpair failed and we were unable to recover it. 
00:41:27.069 [2024-10-07 14:51:50.579706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.069 [2024-10-07 14:51:50.579789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.069 [2024-10-07 14:51:50.579820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.069 [2024-10-07 14:51:50.579835] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.069 [2024-10-07 14:51:50.579846] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.069 [2024-10-07 14:51:50.579873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.069 qpair failed and we were unable to recover it. 
00:41:27.069 [2024-10-07 14:51:50.589680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.069 [2024-10-07 14:51:50.589757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.069 [2024-10-07 14:51:50.589781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.069 [2024-10-07 14:51:50.589793] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.070 [2024-10-07 14:51:50.589802] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.070 [2024-10-07 14:51:50.589825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.070 qpair failed and we were unable to recover it. 
00:41:27.070 [2024-10-07 14:51:50.599752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.070 [2024-10-07 14:51:50.599836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.070 [2024-10-07 14:51:50.599858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.070 [2024-10-07 14:51:50.599869] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.070 [2024-10-07 14:51:50.599879] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.070 [2024-10-07 14:51:50.599899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.070 qpair failed and we were unable to recover it. 
00:41:27.070 [2024-10-07 14:51:50.609796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.070 [2024-10-07 14:51:50.609896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.070 [2024-10-07 14:51:50.609921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.070 [2024-10-07 14:51:50.609933] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.070 [2024-10-07 14:51:50.609942] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.070 [2024-10-07 14:51:50.609964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.070 qpair failed and we were unable to recover it. 
00:41:27.070 [2024-10-07 14:51:50.619808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.070 [2024-10-07 14:51:50.619886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.070 [2024-10-07 14:51:50.619913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.070 [2024-10-07 14:51:50.619925] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.070 [2024-10-07 14:51:50.619934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.070 [2024-10-07 14:51:50.619955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.070 qpair failed and we were unable to recover it. 
00:41:27.070 [2024-10-07 14:51:50.629865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.070 [2024-10-07 14:51:50.629971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.070 [2024-10-07 14:51:50.629992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.070 [2024-10-07 14:51:50.630008] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.070 [2024-10-07 14:51:50.630018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.070 [2024-10-07 14:51:50.630040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.070 qpair failed and we were unable to recover it. 
00:41:27.070 [2024-10-07 14:51:50.639823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.070 [2024-10-07 14:51:50.639897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.070 [2024-10-07 14:51:50.639918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.070 [2024-10-07 14:51:50.639930] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.070 [2024-10-07 14:51:50.639939] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.070 [2024-10-07 14:51:50.639959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.070 qpair failed and we were unable to recover it. 
00:41:27.070 [2024-10-07 14:51:50.649872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.070 [2024-10-07 14:51:50.649947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.070 [2024-10-07 14:51:50.649968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.070 [2024-10-07 14:51:50.649980] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.070 [2024-10-07 14:51:50.649989] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.070 [2024-10-07 14:51:50.650023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.070 qpair failed and we were unable to recover it. 
00:41:27.070 [2024-10-07 14:51:50.659908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.070 [2024-10-07 14:51:50.660035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.070 [2024-10-07 14:51:50.660057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.070 [2024-10-07 14:51:50.660068] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.070 [2024-10-07 14:51:50.660077] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.070 [2024-10-07 14:51:50.660099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.070 qpair failed and we were unable to recover it. 
00:41:27.070 [2024-10-07 14:51:50.669905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.070 [2024-10-07 14:51:50.669984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.070 [2024-10-07 14:51:50.670011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.070 [2024-10-07 14:51:50.670023] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.070 [2024-10-07 14:51:50.670032] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.070 [2024-10-07 14:51:50.670054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.070 qpair failed and we were unable to recover it. 
00:41:27.070 [2024-10-07 14:51:50.679979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.070 [2024-10-07 14:51:50.680096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.070 [2024-10-07 14:51:50.680118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.070 [2024-10-07 14:51:50.680129] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.070 [2024-10-07 14:51:50.680139] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.070 [2024-10-07 14:51:50.680160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.070 qpair failed and we were unable to recover it. 
00:41:27.070 [2024-10-07 14:51:50.689991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.070 [2024-10-07 14:51:50.690084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.070 [2024-10-07 14:51:50.690105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.070 [2024-10-07 14:51:50.690117] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.070 [2024-10-07 14:51:50.690126] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.070 [2024-10-07 14:51:50.690147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.070 qpair failed and we were unable to recover it. 
00:41:27.070 [2024-10-07 14:51:50.700024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.070 [2024-10-07 14:51:50.700100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.070 [2024-10-07 14:51:50.700123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.070 [2024-10-07 14:51:50.700135] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.070 [2024-10-07 14:51:50.700144] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.070 [2024-10-07 14:51:50.700166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.070 qpair failed and we were unable to recover it. 
00:41:27.070 [2024-10-07 14:51:50.710045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.070 [2024-10-07 14:51:50.710122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.070 [2024-10-07 14:51:50.710144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.070 [2024-10-07 14:51:50.710156] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.070 [2024-10-07 14:51:50.710164] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.070 [2024-10-07 14:51:50.710185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.070 qpair failed and we were unable to recover it. 
00:41:27.070 [2024-10-07 14:51:50.720004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.070 [2024-10-07 14:51:50.720080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.070 [2024-10-07 14:51:50.720102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.070 [2024-10-07 14:51:50.720114] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.070 [2024-10-07 14:51:50.720123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.070 [2024-10-07 14:51:50.720147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.070 qpair failed and we were unable to recover it. 
00:41:27.070 [2024-10-07 14:51:50.730086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.070 [2024-10-07 14:51:50.730164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.070 [2024-10-07 14:51:50.730186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.071 [2024-10-07 14:51:50.730198] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.071 [2024-10-07 14:51:50.730207] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.071 [2024-10-07 14:51:50.730229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.071 qpair failed and we were unable to recover it. 
00:41:27.071 [2024-10-07 14:51:50.740067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.071 [2024-10-07 14:51:50.740139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.071 [2024-10-07 14:51:50.740161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.071 [2024-10-07 14:51:50.740172] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.071 [2024-10-07 14:51:50.740181] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.071 [2024-10-07 14:51:50.740207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.071 qpair failed and we were unable to recover it. 
00:41:27.071 [2024-10-07 14:51:50.750136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.071 [2024-10-07 14:51:50.750210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.071 [2024-10-07 14:51:50.750232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.071 [2024-10-07 14:51:50.750244] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.071 [2024-10-07 14:51:50.750253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.071 [2024-10-07 14:51:50.750275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.071 qpair failed and we were unable to recover it. 
00:41:27.071 [2024-10-07 14:51:50.760187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.071 [2024-10-07 14:51:50.760275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.071 [2024-10-07 14:51:50.760296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.071 [2024-10-07 14:51:50.760309] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.071 [2024-10-07 14:51:50.760317] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.071 [2024-10-07 14:51:50.760339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.071 qpair failed and we were unable to recover it. 
00:41:27.071 [2024-10-07 14:51:50.770183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.071 [2024-10-07 14:51:50.770253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.071 [2024-10-07 14:51:50.770274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.071 [2024-10-07 14:51:50.770286] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.071 [2024-10-07 14:51:50.770296] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.071 [2024-10-07 14:51:50.770318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.071 qpair failed and we were unable to recover it. 
00:41:27.332 [2024-10-07 14:51:50.780223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.332 [2024-10-07 14:51:50.780294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.332 [2024-10-07 14:51:50.780315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.332 [2024-10-07 14:51:50.780327] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.332 [2024-10-07 14:51:50.780337] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.332 [2024-10-07 14:51:50.780358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.332 qpair failed and we were unable to recover it. 
00:41:27.332 [2024-10-07 14:51:50.790258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.332 [2024-10-07 14:51:50.790338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.332 [2024-10-07 14:51:50.790363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.332 [2024-10-07 14:51:50.790375] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.332 [2024-10-07 14:51:50.790385] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.332 [2024-10-07 14:51:50.790406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.332 qpair failed and we were unable to recover it. 
00:41:27.332 [2024-10-07 14:51:50.800309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.332 [2024-10-07 14:51:50.800382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.332 [2024-10-07 14:51:50.800403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.332 [2024-10-07 14:51:50.800415] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.332 [2024-10-07 14:51:50.800425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.332 [2024-10-07 14:51:50.800449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.332 qpair failed and we were unable to recover it. 
00:41:27.332 [2024-10-07 14:51:50.810303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.332 [2024-10-07 14:51:50.810381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.332 [2024-10-07 14:51:50.810404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.332 [2024-10-07 14:51:50.810416] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.332 [2024-10-07 14:51:50.810426] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.332 [2024-10-07 14:51:50.810448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.332 qpair failed and we were unable to recover it. 
00:41:27.332 [2024-10-07 14:51:50.820350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.332 [2024-10-07 14:51:50.820422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.332 [2024-10-07 14:51:50.820443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.332 [2024-10-07 14:51:50.820455] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.332 [2024-10-07 14:51:50.820465] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.332 [2024-10-07 14:51:50.820487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.332 qpair failed and we were unable to recover it. 
00:41:27.332 [2024-10-07 14:51:50.830374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.332 [2024-10-07 14:51:50.830497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.332 [2024-10-07 14:51:50.830519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.332 [2024-10-07 14:51:50.830531] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.332 [2024-10-07 14:51:50.830545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.332 [2024-10-07 14:51:50.830566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.332 qpair failed and we were unable to recover it. 
00:41:27.332 [2024-10-07 14:51:50.840337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.332 [2024-10-07 14:51:50.840438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.332 [2024-10-07 14:51:50.840460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.332 [2024-10-07 14:51:50.840471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.332 [2024-10-07 14:51:50.840481] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.332 [2024-10-07 14:51:50.840506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.332 qpair failed and we were unable to recover it. 
00:41:27.332 [2024-10-07 14:51:50.850447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.333 [2024-10-07 14:51:50.850521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.333 [2024-10-07 14:51:50.850542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.333 [2024-10-07 14:51:50.850553] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.333 [2024-10-07 14:51:50.850563] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.333 [2024-10-07 14:51:50.850584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.333 qpair failed and we were unable to recover it. 
00:41:27.333 [2024-10-07 14:51:50.860463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.333 [2024-10-07 14:51:50.860551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.333 [2024-10-07 14:51:50.860574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.333 [2024-10-07 14:51:50.860586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.333 [2024-10-07 14:51:50.860595] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.333 [2024-10-07 14:51:50.860616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.333 qpair failed and we were unable to recover it. 
00:41:27.333 [2024-10-07 14:51:50.870501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.333 [2024-10-07 14:51:50.870573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.333 [2024-10-07 14:51:50.870594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.333 [2024-10-07 14:51:50.870606] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.333 [2024-10-07 14:51:50.870616] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.333 [2024-10-07 14:51:50.870643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.333 qpair failed and we were unable to recover it. 
00:41:27.333 [2024-10-07 14:51:50.880541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.333 [2024-10-07 14:51:50.880621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.333 [2024-10-07 14:51:50.880642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.333 [2024-10-07 14:51:50.880654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.333 [2024-10-07 14:51:50.880665] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.333 [2024-10-07 14:51:50.880686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.333 qpair failed and we were unable to recover it. 
00:41:27.333 [2024-10-07 14:51:50.890519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.333 [2024-10-07 14:51:50.890596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.333 [2024-10-07 14:51:50.890617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.333 [2024-10-07 14:51:50.890629] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.333 [2024-10-07 14:51:50.890639] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.333 [2024-10-07 14:51:50.890660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.333 qpair failed and we were unable to recover it. 
00:41:27.333 [2024-10-07 14:51:50.900570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.333 [2024-10-07 14:51:50.900676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.333 [2024-10-07 14:51:50.900698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.333 [2024-10-07 14:51:50.900709] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.333 [2024-10-07 14:51:50.900718] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.333 [2024-10-07 14:51:50.900739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.333 qpair failed and we were unable to recover it. 
00:41:27.333 [2024-10-07 14:51:50.910603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.333 [2024-10-07 14:51:50.910678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.333 [2024-10-07 14:51:50.910699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.333 [2024-10-07 14:51:50.910711] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.333 [2024-10-07 14:51:50.910720] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.333 [2024-10-07 14:51:50.910741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.333 qpair failed and we were unable to recover it. 
00:41:27.333 [2024-10-07 14:51:50.920631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.333 [2024-10-07 14:51:50.920705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.333 [2024-10-07 14:51:50.920727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.333 [2024-10-07 14:51:50.920738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.333 [2024-10-07 14:51:50.920752] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.333 [2024-10-07 14:51:50.920773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.333 qpair failed and we were unable to recover it. 
00:41:27.333 [2024-10-07 14:51:50.930631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.333 [2024-10-07 14:51:50.930708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.333 [2024-10-07 14:51:50.930729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.333 [2024-10-07 14:51:50.930740] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.333 [2024-10-07 14:51:50.930750] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.333 [2024-10-07 14:51:50.930772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.333 qpair failed and we were unable to recover it.
00:41:27.333 [2024-10-07 14:51:50.940712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.333 [2024-10-07 14:51:50.940785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.333 [2024-10-07 14:51:50.940807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.333 [2024-10-07 14:51:50.940818] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.333 [2024-10-07 14:51:50.940828] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.333 [2024-10-07 14:51:50.940850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.333 qpair failed and we were unable to recover it.
00:41:27.333 [2024-10-07 14:51:50.950490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.333 [2024-10-07 14:51:50.950557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.333 [2024-10-07 14:51:50.950578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.333 [2024-10-07 14:51:50.950590] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.333 [2024-10-07 14:51:50.950600] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.333 [2024-10-07 14:51:50.950621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.333 qpair failed and we were unable to recover it.
00:41:27.333 [2024-10-07 14:51:50.960693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.333 [2024-10-07 14:51:50.960770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.333 [2024-10-07 14:51:50.960792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.333 [2024-10-07 14:51:50.960803] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.333 [2024-10-07 14:51:50.960813] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.333 [2024-10-07 14:51:50.960834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.333 qpair failed and we were unable to recover it.
00:41:27.333 [2024-10-07 14:51:50.970594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.333 [2024-10-07 14:51:50.970661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.333 [2024-10-07 14:51:50.970682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.333 [2024-10-07 14:51:50.970693] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.333 [2024-10-07 14:51:50.970703] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.333 [2024-10-07 14:51:50.970725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.333 qpair failed and we were unable to recover it.
00:41:27.333 [2024-10-07 14:51:50.980876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.333 [2024-10-07 14:51:50.980948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.333 [2024-10-07 14:51:50.980970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.333 [2024-10-07 14:51:50.980981] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.333 [2024-10-07 14:51:50.980991] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.333 [2024-10-07 14:51:50.981020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.333 qpair failed and we were unable to recover it.
00:41:27.333 [2024-10-07 14:51:50.990656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.334 [2024-10-07 14:51:50.990722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.334 [2024-10-07 14:51:50.990743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.334 [2024-10-07 14:51:50.990754] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.334 [2024-10-07 14:51:50.990763] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.334 [2024-10-07 14:51:50.990784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.334 qpair failed and we were unable to recover it.
00:41:27.334 [2024-10-07 14:51:51.000822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.334 [2024-10-07 14:51:51.000899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.334 [2024-10-07 14:51:51.000921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.334 [2024-10-07 14:51:51.000932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.334 [2024-10-07 14:51:51.000942] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.334 [2024-10-07 14:51:51.000962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.334 qpair failed and we were unable to recover it.
00:41:27.334 [2024-10-07 14:51:51.010708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.334 [2024-10-07 14:51:51.010779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.334 [2024-10-07 14:51:51.010801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.334 [2024-10-07 14:51:51.010815] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.334 [2024-10-07 14:51:51.010825] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.334 [2024-10-07 14:51:51.010847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.334 qpair failed and we were unable to recover it.
00:41:27.334 [2024-10-07 14:51:51.020932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.334 [2024-10-07 14:51:51.021017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.334 [2024-10-07 14:51:51.021038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.334 [2024-10-07 14:51:51.021050] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.334 [2024-10-07 14:51:51.021060] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.334 [2024-10-07 14:51:51.021082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.334 qpair failed and we were unable to recover it.
00:41:27.334 [2024-10-07 14:51:51.030791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.334 [2024-10-07 14:51:51.030860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.334 [2024-10-07 14:51:51.030882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.334 [2024-10-07 14:51:51.030893] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.334 [2024-10-07 14:51:51.030903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.334 [2024-10-07 14:51:51.030924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.334 qpair failed and we were unable to recover it.
00:41:27.595 [2024-10-07 14:51:51.040971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.595 [2024-10-07 14:51:51.041050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.595 [2024-10-07 14:51:51.041072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.595 [2024-10-07 14:51:51.041085] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.595 [2024-10-07 14:51:51.041094] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.595 [2024-10-07 14:51:51.041115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.595 qpair failed and we were unable to recover it.
00:41:27.595 [2024-10-07 14:51:51.050801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.595 [2024-10-07 14:51:51.050865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.595 [2024-10-07 14:51:51.050886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.595 [2024-10-07 14:51:51.050898] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.595 [2024-10-07 14:51:51.050907] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.595 [2024-10-07 14:51:51.050929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.595 qpair failed and we were unable to recover it.
00:41:27.595 [2024-10-07 14:51:51.061089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.595 [2024-10-07 14:51:51.061165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.595 [2024-10-07 14:51:51.061186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.595 [2024-10-07 14:51:51.061197] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.595 [2024-10-07 14:51:51.061207] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.595 [2024-10-07 14:51:51.061229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.595 qpair failed and we were unable to recover it.
00:41:27.595 [2024-10-07 14:51:51.070995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.595 [2024-10-07 14:51:51.071064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.595 [2024-10-07 14:51:51.071085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.595 [2024-10-07 14:51:51.071097] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.595 [2024-10-07 14:51:51.071106] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.595 [2024-10-07 14:51:51.071128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.595 qpair failed and we were unable to recover it.
00:41:27.595 [2024-10-07 14:51:51.080986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.595 [2024-10-07 14:51:51.081097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.595 [2024-10-07 14:51:51.081119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.595 [2024-10-07 14:51:51.081131] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.595 [2024-10-07 14:51:51.081141] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.595 [2024-10-07 14:51:51.081162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.595 qpair failed and we were unable to recover it.
00:41:27.595 [2024-10-07 14:51:51.090934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.595 [2024-10-07 14:51:51.091007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.595 [2024-10-07 14:51:51.091029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.595 [2024-10-07 14:51:51.091040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.595 [2024-10-07 14:51:51.091050] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.595 [2024-10-07 14:51:51.091072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.595 qpair failed and we were unable to recover it.
00:41:27.595 [2024-10-07 14:51:51.101301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.595 [2024-10-07 14:51:51.101403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.595 [2024-10-07 14:51:51.101425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.596 [2024-10-07 14:51:51.101440] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.596 [2024-10-07 14:51:51.101449] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.596 [2024-10-07 14:51:51.101470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.596 qpair failed and we were unable to recover it.
00:41:27.596 [2024-10-07 14:51:51.110948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.596 [2024-10-07 14:51:51.111021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.596 [2024-10-07 14:51:51.111043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.596 [2024-10-07 14:51:51.111055] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.596 [2024-10-07 14:51:51.111065] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.596 [2024-10-07 14:51:51.111087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.596 qpair failed and we were unable to recover it.
00:41:27.596 [2024-10-07 14:51:51.121125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.596 [2024-10-07 14:51:51.121225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.596 [2024-10-07 14:51:51.121247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.596 [2024-10-07 14:51:51.121259] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.596 [2024-10-07 14:51:51.121268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.596 [2024-10-07 14:51:51.121289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.596 qpair failed and we were unable to recover it.
00:41:27.596 [2024-10-07 14:51:51.131014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.596 [2024-10-07 14:51:51.131079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.596 [2024-10-07 14:51:51.131100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.596 [2024-10-07 14:51:51.131117] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.596 [2024-10-07 14:51:51.131127] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.596 [2024-10-07 14:51:51.131149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.596 qpair failed and we were unable to recover it.
00:41:27.596 [2024-10-07 14:51:51.141257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.596 [2024-10-07 14:51:51.141331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.596 [2024-10-07 14:51:51.141352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.596 [2024-10-07 14:51:51.141363] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.596 [2024-10-07 14:51:51.141373] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.596 [2024-10-07 14:51:51.141394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.596 qpair failed and we were unable to recover it.
00:41:27.596 [2024-10-07 14:51:51.151071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.596 [2024-10-07 14:51:51.151140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.596 [2024-10-07 14:51:51.151162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.596 [2024-10-07 14:51:51.151173] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.596 [2024-10-07 14:51:51.151182] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.596 [2024-10-07 14:51:51.151203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.596 qpair failed and we were unable to recover it.
00:41:27.596 [2024-10-07 14:51:51.161299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.596 [2024-10-07 14:51:51.161372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.596 [2024-10-07 14:51:51.161393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.596 [2024-10-07 14:51:51.161405] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.596 [2024-10-07 14:51:51.161415] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.596 [2024-10-07 14:51:51.161436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.596 qpair failed and we were unable to recover it.
00:41:27.596 [2024-10-07 14:51:51.171189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.596 [2024-10-07 14:51:51.171259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.596 [2024-10-07 14:51:51.171280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.596 [2024-10-07 14:51:51.171292] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.596 [2024-10-07 14:51:51.171302] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.596 [2024-10-07 14:51:51.171327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.596 qpair failed and we were unable to recover it.
00:41:27.596 [2024-10-07 14:51:51.181361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.596 [2024-10-07 14:51:51.181453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.596 [2024-10-07 14:51:51.181475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.596 [2024-10-07 14:51:51.181487] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.596 [2024-10-07 14:51:51.181496] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.596 [2024-10-07 14:51:51.181518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.596 qpair failed and we were unable to recover it.
00:41:27.596 [2024-10-07 14:51:51.191102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.596 [2024-10-07 14:51:51.191169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.596 [2024-10-07 14:51:51.191193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.596 [2024-10-07 14:51:51.191205] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.596 [2024-10-07 14:51:51.191215] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.596 [2024-10-07 14:51:51.191236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.596 qpair failed and we were unable to recover it.
00:41:27.596 [2024-10-07 14:51:51.201407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.596 [2024-10-07 14:51:51.201481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.596 [2024-10-07 14:51:51.201502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.596 [2024-10-07 14:51:51.201514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.596 [2024-10-07 14:51:51.201524] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.596 [2024-10-07 14:51:51.201546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.596 qpair failed and we were unable to recover it.
00:41:27.596 [2024-10-07 14:51:51.211172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.596 [2024-10-07 14:51:51.211239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.596 [2024-10-07 14:51:51.211260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.596 [2024-10-07 14:51:51.211271] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.596 [2024-10-07 14:51:51.211280] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.596 [2024-10-07 14:51:51.211302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.596 qpair failed and we were unable to recover it.
00:41:27.596 [2024-10-07 14:51:51.221466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.596 [2024-10-07 14:51:51.221552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.596 [2024-10-07 14:51:51.221573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.596 [2024-10-07 14:51:51.221585] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.596 [2024-10-07 14:51:51.221594] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.596 [2024-10-07 14:51:51.221616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.596 qpair failed and we were unable to recover it.
00:41:27.596 [2024-10-07 14:51:51.231311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.596 [2024-10-07 14:51:51.231379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.596 [2024-10-07 14:51:51.231400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.596 [2024-10-07 14:51:51.231411] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.596 [2024-10-07 14:51:51.231420] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.596 [2024-10-07 14:51:51.231445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.596 qpair failed and we were unable to recover it.
00:41:27.596 [2024-10-07 14:51:51.241528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.596 [2024-10-07 14:51:51.241603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.596 [2024-10-07 14:51:51.241625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.596 [2024-10-07 14:51:51.241636] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.596 [2024-10-07 14:51:51.241645] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.596 [2024-10-07 14:51:51.241666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.596 qpair failed and we were unable to recover it.
00:41:27.596 [2024-10-07 14:51:51.251349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.596 [2024-10-07 14:51:51.251418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.596 [2024-10-07 14:51:51.251439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.596 [2024-10-07 14:51:51.251451] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.596 [2024-10-07 14:51:51.251461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.596 [2024-10-07 14:51:51.251482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.597 qpair failed and we were unable to recover it.
00:41:27.597 [2024-10-07 14:51:51.261592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.597 [2024-10-07 14:51:51.261701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.597 [2024-10-07 14:51:51.261724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.597 [2024-10-07 14:51:51.261735] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.597 [2024-10-07 14:51:51.261744] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.597 [2024-10-07 14:51:51.261765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.597 qpair failed and we were unable to recover it.
00:41:27.597 [2024-10-07 14:51:51.271438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.597 [2024-10-07 14:51:51.271505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.597 [2024-10-07 14:51:51.271526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.597 [2024-10-07 14:51:51.271538] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.597 [2024-10-07 14:51:51.271547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.597 [2024-10-07 14:51:51.271568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.597 qpair failed and we were unable to recover it.
00:41:27.597 [2024-10-07 14:51:51.281640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:27.597 [2024-10-07 14:51:51.281724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:27.597 [2024-10-07 14:51:51.281749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:27.597 [2024-10-07 14:51:51.281761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:27.597 [2024-10-07 14:51:51.281770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:27.597 [2024-10-07 14:51:51.281791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:27.597 qpair failed and we were unable to recover it.
00:41:27.597 [2024-10-07 14:51:51.291589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.597 [2024-10-07 14:51:51.291710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.597 [2024-10-07 14:51:51.291732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.597 [2024-10-07 14:51:51.291743] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.597 [2024-10-07 14:51:51.291753] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.597 [2024-10-07 14:51:51.291773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.597 qpair failed and we were unable to recover it. 
00:41:27.597 [2024-10-07 14:51:51.301707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.597 [2024-10-07 14:51:51.301778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.597 [2024-10-07 14:51:51.301798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.597 [2024-10-07 14:51:51.301810] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.597 [2024-10-07 14:51:51.301819] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.597 [2024-10-07 14:51:51.301840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.597 qpair failed and we were unable to recover it. 
00:41:27.858 [2024-10-07 14:51:51.311484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.858 [2024-10-07 14:51:51.311551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.858 [2024-10-07 14:51:51.311572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.858 [2024-10-07 14:51:51.311584] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.858 [2024-10-07 14:51:51.311593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.858 [2024-10-07 14:51:51.311615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.858 qpair failed and we were unable to recover it. 
00:41:27.858 [2024-10-07 14:51:51.321761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.858 [2024-10-07 14:51:51.321833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.858 [2024-10-07 14:51:51.321854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.858 [2024-10-07 14:51:51.321865] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.858 [2024-10-07 14:51:51.321878] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.858 [2024-10-07 14:51:51.321899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.858 qpair failed and we were unable to recover it. 
00:41:27.858 [2024-10-07 14:51:51.331587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.858 [2024-10-07 14:51:51.331654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.858 [2024-10-07 14:51:51.331675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.858 [2024-10-07 14:51:51.331686] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.858 [2024-10-07 14:51:51.331695] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.858 [2024-10-07 14:51:51.331717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.858 qpair failed and we were unable to recover it. 
00:41:27.858 [2024-10-07 14:51:51.341800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.858 [2024-10-07 14:51:51.341885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.858 [2024-10-07 14:51:51.341906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.858 [2024-10-07 14:51:51.341918] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.858 [2024-10-07 14:51:51.341927] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.858 [2024-10-07 14:51:51.341948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.858 qpair failed and we were unable to recover it. 
00:41:27.858 [2024-10-07 14:51:51.351517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.858 [2024-10-07 14:51:51.351583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.858 [2024-10-07 14:51:51.351604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.858 [2024-10-07 14:51:51.351615] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.858 [2024-10-07 14:51:51.351624] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.858 [2024-10-07 14:51:51.351647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.858 qpair failed and we were unable to recover it. 
00:41:27.858 [2024-10-07 14:51:51.361853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.858 [2024-10-07 14:51:51.361944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.858 [2024-10-07 14:51:51.361966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.858 [2024-10-07 14:51:51.361978] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.858 [2024-10-07 14:51:51.361987] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.859 [2024-10-07 14:51:51.362018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.859 qpair failed and we were unable to recover it. 
00:41:27.859 [2024-10-07 14:51:51.371671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.859 [2024-10-07 14:51:51.371775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.859 [2024-10-07 14:51:51.371798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.859 [2024-10-07 14:51:51.371809] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.859 [2024-10-07 14:51:51.371819] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.859 [2024-10-07 14:51:51.371840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.859 qpair failed and we were unable to recover it. 
00:41:27.859 [2024-10-07 14:51:51.381902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.859 [2024-10-07 14:51:51.381972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.859 [2024-10-07 14:51:51.381993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.859 [2024-10-07 14:51:51.382011] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.859 [2024-10-07 14:51:51.382020] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.859 [2024-10-07 14:51:51.382042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.859 qpair failed and we were unable to recover it. 
00:41:27.859 [2024-10-07 14:51:51.391758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.859 [2024-10-07 14:51:51.391853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.859 [2024-10-07 14:51:51.391875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.859 [2024-10-07 14:51:51.391886] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.859 [2024-10-07 14:51:51.391895] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.859 [2024-10-07 14:51:51.391916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.859 qpair failed and we were unable to recover it. 
00:41:27.859 [2024-10-07 14:51:51.401967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.859 [2024-10-07 14:51:51.402057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.859 [2024-10-07 14:51:51.402079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.859 [2024-10-07 14:51:51.402091] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.859 [2024-10-07 14:51:51.402099] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.859 [2024-10-07 14:51:51.402121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.859 qpair failed and we were unable to recover it. 
00:41:27.859 [2024-10-07 14:51:51.411797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.859 [2024-10-07 14:51:51.411896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.859 [2024-10-07 14:51:51.411918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.859 [2024-10-07 14:51:51.411929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.859 [2024-10-07 14:51:51.411941] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.859 [2024-10-07 14:51:51.411963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.859 qpair failed and we were unable to recover it. 
00:41:27.859 [2024-10-07 14:51:51.422048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.859 [2024-10-07 14:51:51.422120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.859 [2024-10-07 14:51:51.422141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.859 [2024-10-07 14:51:51.422153] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.859 [2024-10-07 14:51:51.422162] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.859 [2024-10-07 14:51:51.422183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.859 qpair failed and we were unable to recover it. 
00:41:27.859 [2024-10-07 14:51:51.431829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.859 [2024-10-07 14:51:51.431895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.859 [2024-10-07 14:51:51.431917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.859 [2024-10-07 14:51:51.431928] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.859 [2024-10-07 14:51:51.431938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.859 [2024-10-07 14:51:51.431959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.859 qpair failed and we were unable to recover it. 
00:41:27.859 [2024-10-07 14:51:51.442079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.859 [2024-10-07 14:51:51.442191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.859 [2024-10-07 14:51:51.442213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.859 [2024-10-07 14:51:51.442224] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.859 [2024-10-07 14:51:51.442233] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.859 [2024-10-07 14:51:51.442255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.859 qpair failed and we were unable to recover it. 
00:41:27.859 [2024-10-07 14:51:51.451891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.859 [2024-10-07 14:51:51.451964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.859 [2024-10-07 14:51:51.451986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.859 [2024-10-07 14:51:51.451997] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.859 [2024-10-07 14:51:51.452022] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.859 [2024-10-07 14:51:51.452043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.859 qpair failed and we were unable to recover it. 
00:41:27.859 [2024-10-07 14:51:51.462171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.859 [2024-10-07 14:51:51.462248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.859 [2024-10-07 14:51:51.462269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.859 [2024-10-07 14:51:51.462281] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.859 [2024-10-07 14:51:51.462290] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.859 [2024-10-07 14:51:51.462312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.859 qpair failed and we were unable to recover it. 
00:41:27.859 [2024-10-07 14:51:51.471967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.859 [2024-10-07 14:51:51.472048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.859 [2024-10-07 14:51:51.472070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.859 [2024-10-07 14:51:51.472082] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.859 [2024-10-07 14:51:51.472091] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.859 [2024-10-07 14:51:51.472114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.859 qpair failed and we were unable to recover it. 
00:41:27.859 [2024-10-07 14:51:51.482200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.859 [2024-10-07 14:51:51.482302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.859 [2024-10-07 14:51:51.482323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.859 [2024-10-07 14:51:51.482335] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.859 [2024-10-07 14:51:51.482344] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.859 [2024-10-07 14:51:51.482366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.859 qpair failed and we were unable to recover it. 
00:41:27.859 [2024-10-07 14:51:51.492059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.859 [2024-10-07 14:51:51.492167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.859 [2024-10-07 14:51:51.492188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.859 [2024-10-07 14:51:51.492199] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.859 [2024-10-07 14:51:51.492209] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.859 [2024-10-07 14:51:51.492230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.859 qpair failed and we were unable to recover it. 
00:41:27.859 [2024-10-07 14:51:51.502217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.859 [2024-10-07 14:51:51.502289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.859 [2024-10-07 14:51:51.502310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.859 [2024-10-07 14:51:51.502325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.860 [2024-10-07 14:51:51.502334] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.860 [2024-10-07 14:51:51.502359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.860 qpair failed and we were unable to recover it. 
00:41:27.860 [2024-10-07 14:51:51.512115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.860 [2024-10-07 14:51:51.512187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.860 [2024-10-07 14:51:51.512208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.860 [2024-10-07 14:51:51.512219] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.860 [2024-10-07 14:51:51.512228] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.860 [2024-10-07 14:51:51.512249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.860 qpair failed and we were unable to recover it. 
00:41:27.860 [2024-10-07 14:51:51.522341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.860 [2024-10-07 14:51:51.522419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.860 [2024-10-07 14:51:51.522439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.860 [2024-10-07 14:51:51.522451] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.860 [2024-10-07 14:51:51.522460] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.860 [2024-10-07 14:51:51.522482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.860 qpair failed and we were unable to recover it. 
00:41:27.860 [2024-10-07 14:51:51.532115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.860 [2024-10-07 14:51:51.532184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.860 [2024-10-07 14:51:51.532205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.860 [2024-10-07 14:51:51.532217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.860 [2024-10-07 14:51:51.532226] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.860 [2024-10-07 14:51:51.532247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.860 qpair failed and we were unable to recover it. 
00:41:27.860 [2024-10-07 14:51:51.542173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.860 [2024-10-07 14:51:51.542237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.860 [2024-10-07 14:51:51.542258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.860 [2024-10-07 14:51:51.542270] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.860 [2024-10-07 14:51:51.542279] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.860 [2024-10-07 14:51:51.542300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.860 qpair failed and we were unable to recover it. 
00:41:27.860 [2024-10-07 14:51:51.552158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.860 [2024-10-07 14:51:51.552258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.860 [2024-10-07 14:51:51.552280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.860 [2024-10-07 14:51:51.552292] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.860 [2024-10-07 14:51:51.552302] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.860 [2024-10-07 14:51:51.552323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.860 qpair failed and we were unable to recover it. 
00:41:27.860 [2024-10-07 14:51:51.562408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:27.860 [2024-10-07 14:51:51.562483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:27.860 [2024-10-07 14:51:51.562504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:27.860 [2024-10-07 14:51:51.562515] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:27.860 [2024-10-07 14:51:51.562525] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:27.860 [2024-10-07 14:51:51.562547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:27.860 qpair failed and we were unable to recover it. 
00:41:28.122 [2024-10-07 14:51:51.572282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.122 [2024-10-07 14:51:51.572348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.122 [2024-10-07 14:51:51.572370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.122 [2024-10-07 14:51:51.572381] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.122 [2024-10-07 14:51:51.572390] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.122 [2024-10-07 14:51:51.572411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.122 qpair failed and we were unable to recover it.
00:41:28.122 [2024-10-07 14:51:51.582285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.122 [2024-10-07 14:51:51.582353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.122 [2024-10-07 14:51:51.582374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.122 [2024-10-07 14:51:51.582386] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.122 [2024-10-07 14:51:51.582395] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.122 [2024-10-07 14:51:51.582416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.122 qpair failed and we were unable to recover it.
00:41:28.122 [2024-10-07 14:51:51.592210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.122 [2024-10-07 14:51:51.592293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.122 [2024-10-07 14:51:51.592314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.122 [2024-10-07 14:51:51.592330] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.122 [2024-10-07 14:51:51.592339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.122 [2024-10-07 14:51:51.592360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.122 qpair failed and we were unable to recover it.
00:41:28.122 [2024-10-07 14:51:51.602553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.122 [2024-10-07 14:51:51.602657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.122 [2024-10-07 14:51:51.602679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.122 [2024-10-07 14:51:51.602690] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.122 [2024-10-07 14:51:51.602699] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.122 [2024-10-07 14:51:51.602720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.122 qpair failed and we were unable to recover it.
00:41:28.122 [2024-10-07 14:51:51.612324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.122 [2024-10-07 14:51:51.612392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.122 [2024-10-07 14:51:51.612414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.122 [2024-10-07 14:51:51.612425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.122 [2024-10-07 14:51:51.612435] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.122 [2024-10-07 14:51:51.612456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.122 qpair failed and we were unable to recover it.
00:41:28.122 [2024-10-07 14:51:51.622370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.122 [2024-10-07 14:51:51.622438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.122 [2024-10-07 14:51:51.622459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.122 [2024-10-07 14:51:51.622470] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.122 [2024-10-07 14:51:51.622479] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.122 [2024-10-07 14:51:51.622500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.122 qpair failed and we were unable to recover it.
00:41:28.122 [2024-10-07 14:51:51.632398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.122 [2024-10-07 14:51:51.632469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.122 [2024-10-07 14:51:51.632490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.122 [2024-10-07 14:51:51.632501] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.122 [2024-10-07 14:51:51.632511] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.122 [2024-10-07 14:51:51.632532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.122 qpair failed and we were unable to recover it.
00:41:28.122 [2024-10-07 14:51:51.642546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.122 [2024-10-07 14:51:51.642622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.122 [2024-10-07 14:51:51.642644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.122 [2024-10-07 14:51:51.642655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.122 [2024-10-07 14:51:51.642671] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.122 [2024-10-07 14:51:51.642692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.122 qpair failed and we were unable to recover it.
00:41:28.122 [2024-10-07 14:51:51.652477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.122 [2024-10-07 14:51:51.652542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.122 [2024-10-07 14:51:51.652564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.122 [2024-10-07 14:51:51.652575] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.122 [2024-10-07 14:51:51.652584] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.122 [2024-10-07 14:51:51.652605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.122 qpair failed and we were unable to recover it.
00:41:28.122 [2024-10-07 14:51:51.662503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.122 [2024-10-07 14:51:51.662570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.122 [2024-10-07 14:51:51.662592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.122 [2024-10-07 14:51:51.662603] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.122 [2024-10-07 14:51:51.662612] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.122 [2024-10-07 14:51:51.662634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.122 qpair failed and we were unable to recover it.
00:41:28.122 [2024-10-07 14:51:51.672406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.122 [2024-10-07 14:51:51.672471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.122 [2024-10-07 14:51:51.672492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.122 [2024-10-07 14:51:51.672503] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.122 [2024-10-07 14:51:51.672512] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.122 [2024-10-07 14:51:51.672533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.122 qpair failed and we were unable to recover it.
00:41:28.122 [2024-10-07 14:51:51.682731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.122 [2024-10-07 14:51:51.682806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.122 [2024-10-07 14:51:51.682830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.122 [2024-10-07 14:51:51.682841] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.122 [2024-10-07 14:51:51.682851] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.122 [2024-10-07 14:51:51.682872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.122 qpair failed and we were unable to recover it.
00:41:28.122 [2024-10-07 14:51:51.692561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.123 [2024-10-07 14:51:51.692636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.123 [2024-10-07 14:51:51.692657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.123 [2024-10-07 14:51:51.692669] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.123 [2024-10-07 14:51:51.692678] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.123 [2024-10-07 14:51:51.692700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.123 qpair failed and we were unable to recover it.
00:41:28.123 [2024-10-07 14:51:51.702595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.123 [2024-10-07 14:51:51.702681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.123 [2024-10-07 14:51:51.702701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.123 [2024-10-07 14:51:51.702714] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.123 [2024-10-07 14:51:51.702723] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.123 [2024-10-07 14:51:51.702744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.123 qpair failed and we were unable to recover it.
00:41:28.123 [2024-10-07 14:51:51.712575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.123 [2024-10-07 14:51:51.712643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.123 [2024-10-07 14:51:51.712664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.123 [2024-10-07 14:51:51.712676] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.123 [2024-10-07 14:51:51.712685] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.123 [2024-10-07 14:51:51.712707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.123 qpair failed and we were unable to recover it.
00:41:28.123 [2024-10-07 14:51:51.722852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.123 [2024-10-07 14:51:51.722925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.123 [2024-10-07 14:51:51.722947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.123 [2024-10-07 14:51:51.722958] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.123 [2024-10-07 14:51:51.722968] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.123 [2024-10-07 14:51:51.722993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.123 qpair failed and we were unable to recover it.
00:41:28.123 [2024-10-07 14:51:51.732664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.123 [2024-10-07 14:51:51.732730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.123 [2024-10-07 14:51:51.732751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.123 [2024-10-07 14:51:51.732762] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.123 [2024-10-07 14:51:51.732771] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.123 [2024-10-07 14:51:51.732793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.123 qpair failed and we were unable to recover it.
00:41:28.123 [2024-10-07 14:51:51.742654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.123 [2024-10-07 14:51:51.742730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.123 [2024-10-07 14:51:51.742751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.123 [2024-10-07 14:51:51.742762] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.123 [2024-10-07 14:51:51.742772] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.123 [2024-10-07 14:51:51.742794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.123 qpair failed and we were unable to recover it.
00:41:28.123 [2024-10-07 14:51:51.752805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.123 [2024-10-07 14:51:51.752877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.123 [2024-10-07 14:51:51.752898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.123 [2024-10-07 14:51:51.752909] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.123 [2024-10-07 14:51:51.752920] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.123 [2024-10-07 14:51:51.752941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.123 qpair failed and we were unable to recover it.
00:41:28.123 [2024-10-07 14:51:51.762967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.123 [2024-10-07 14:51:51.763054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.123 [2024-10-07 14:51:51.763076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.123 [2024-10-07 14:51:51.763089] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.123 [2024-10-07 14:51:51.763098] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.123 [2024-10-07 14:51:51.763120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.123 qpair failed and we were unable to recover it.
00:41:28.123 [2024-10-07 14:51:51.772790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.123 [2024-10-07 14:51:51.772852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.123 [2024-10-07 14:51:51.772876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.123 [2024-10-07 14:51:51.772888] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.123 [2024-10-07 14:51:51.772898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.123 [2024-10-07 14:51:51.772919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.123 qpair failed and we were unable to recover it.
00:41:28.123 [2024-10-07 14:51:51.782787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.123 [2024-10-07 14:51:51.782856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.123 [2024-10-07 14:51:51.782877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.123 [2024-10-07 14:51:51.782888] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.123 [2024-10-07 14:51:51.782897] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.123 [2024-10-07 14:51:51.782919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.123 qpair failed and we were unable to recover it.
00:41:28.123 [2024-10-07 14:51:51.792807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.123 [2024-10-07 14:51:51.792886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.123 [2024-10-07 14:51:51.792912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.123 [2024-10-07 14:51:51.792925] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.123 [2024-10-07 14:51:51.792935] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.123 [2024-10-07 14:51:51.792957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.123 qpair failed and we were unable to recover it.
00:41:28.123 [2024-10-07 14:51:51.803067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.123 [2024-10-07 14:51:51.803143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.123 [2024-10-07 14:51:51.803165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.123 [2024-10-07 14:51:51.803177] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.123 [2024-10-07 14:51:51.803186] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.123 [2024-10-07 14:51:51.803208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.123 qpair failed and we were unable to recover it.
00:41:28.123 [2024-10-07 14:51:51.812918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.123 [2024-10-07 14:51:51.812985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.123 [2024-10-07 14:51:51.813013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.123 [2024-10-07 14:51:51.813025] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.123 [2024-10-07 14:51:51.813035] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.123 [2024-10-07 14:51:51.813061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.123 qpair failed and we were unable to recover it.
00:41:28.123 [2024-10-07 14:51:51.822855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.123 [2024-10-07 14:51:51.822923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.123 [2024-10-07 14:51:51.822945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.123 [2024-10-07 14:51:51.822956] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.123 [2024-10-07 14:51:51.822967] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.123 [2024-10-07 14:51:51.822989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.123 qpair failed and we were unable to recover it.
00:41:28.387 [2024-10-07 14:51:51.832948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.387 [2024-10-07 14:51:51.833022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.387 [2024-10-07 14:51:51.833044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.387 [2024-10-07 14:51:51.833055] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.387 [2024-10-07 14:51:51.833065] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.387 [2024-10-07 14:51:51.833105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.387 qpair failed and we were unable to recover it.
00:41:28.387 [2024-10-07 14:51:51.843170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.387 [2024-10-07 14:51:51.843246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.387 [2024-10-07 14:51:51.843267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.387 [2024-10-07 14:51:51.843278] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.387 [2024-10-07 14:51:51.843289] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.387 [2024-10-07 14:51:51.843310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.387 qpair failed and we were unable to recover it.
00:41:28.387 [2024-10-07 14:51:51.852913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.387 [2024-10-07 14:51:51.852978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.387 [2024-10-07 14:51:51.853009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.387 [2024-10-07 14:51:51.853021] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.387 [2024-10-07 14:51:51.853031] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.387 [2024-10-07 14:51:51.853052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.387 qpair failed and we were unable to recover it.
00:41:28.387 [2024-10-07 14:51:51.863009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.387 [2024-10-07 14:51:51.863079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.387 [2024-10-07 14:51:51.863101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.387 [2024-10-07 14:51:51.863112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.387 [2024-10-07 14:51:51.863121] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.387 [2024-10-07 14:51:51.863143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.387 qpair failed and we were unable to recover it.
00:41:28.387 [2024-10-07 14:51:51.873102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.387 [2024-10-07 14:51:51.873178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.387 [2024-10-07 14:51:51.873200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.387 [2024-10-07 14:51:51.873211] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.387 [2024-10-07 14:51:51.873220] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.387 [2024-10-07 14:51:51.873242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.387 qpair failed and we were unable to recover it.
00:41:28.387 [2024-10-07 14:51:51.883280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.387 [2024-10-07 14:51:51.883363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.387 [2024-10-07 14:51:51.883384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.387 [2024-10-07 14:51:51.883397] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.387 [2024-10-07 14:51:51.883406] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.387 [2024-10-07 14:51:51.883427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.387 qpair failed and we were unable to recover it.
00:41:28.387 [2024-10-07 14:51:51.893146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.387 [2024-10-07 14:51:51.893215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.387 [2024-10-07 14:51:51.893236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.387 [2024-10-07 14:51:51.893247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.387 [2024-10-07 14:51:51.893257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.387 [2024-10-07 14:51:51.893280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.387 qpair failed and we were unable to recover it.
00:41:28.387 [2024-10-07 14:51:51.903184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.387 [2024-10-07 14:51:51.903261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.387 [2024-10-07 14:51:51.903287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.387 [2024-10-07 14:51:51.903299] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.387 [2024-10-07 14:51:51.903312] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.387 [2024-10-07 14:51:51.903334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.387 qpair failed and we were unable to recover it.
00:41:28.387 [2024-10-07 14:51:51.913191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.387 [2024-10-07 14:51:51.913272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.387 [2024-10-07 14:51:51.913293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.387 [2024-10-07 14:51:51.913305] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.387 [2024-10-07 14:51:51.913314] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.387 [2024-10-07 14:51:51.913336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.387 qpair failed and we were unable to recover it.
00:41:28.387 [2024-10-07 14:51:51.923346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:28.387 [2024-10-07 14:51:51.923437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:28.387 [2024-10-07 14:51:51.923460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:28.387 [2024-10-07 14:51:51.923471] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:28.387 [2024-10-07 14:51:51.923481] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:28.387 [2024-10-07 14:51:51.923501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:28.387 qpair failed and we were unable to recover it.
00:41:28.387 [2024-10-07 14:51:51.933178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.387 [2024-10-07 14:51:51.933259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.387 [2024-10-07 14:51:51.933281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.387 [2024-10-07 14:51:51.933293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.387 [2024-10-07 14:51:51.933302] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.387 [2024-10-07 14:51:51.933325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.387 qpair failed and we were unable to recover it. 
00:41:28.387 [2024-10-07 14:51:51.943259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.387 [2024-10-07 14:51:51.943322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.387 [2024-10-07 14:51:51.943343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.387 [2024-10-07 14:51:51.943354] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.388 [2024-10-07 14:51:51.943363] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.388 [2024-10-07 14:51:51.943386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.388 qpair failed and we were unable to recover it. 
00:41:28.388 [2024-10-07 14:51:51.953280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.388 [2024-10-07 14:51:51.953371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.388 [2024-10-07 14:51:51.953393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.388 [2024-10-07 14:51:51.953405] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.388 [2024-10-07 14:51:51.953414] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.388 [2024-10-07 14:51:51.953436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.388 qpair failed and we were unable to recover it. 
00:41:28.388 [2024-10-07 14:51:51.963537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.388 [2024-10-07 14:51:51.963610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.388 [2024-10-07 14:51:51.963631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.388 [2024-10-07 14:51:51.963643] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.388 [2024-10-07 14:51:51.963653] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.388 [2024-10-07 14:51:51.963674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.388 qpair failed and we were unable to recover it. 
00:41:28.388 [2024-10-07 14:51:51.973363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.388 [2024-10-07 14:51:51.973446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.388 [2024-10-07 14:51:51.973467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.388 [2024-10-07 14:51:51.973479] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.388 [2024-10-07 14:51:51.973488] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.388 [2024-10-07 14:51:51.973509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.388 qpair failed and we were unable to recover it. 
00:41:28.388 [2024-10-07 14:51:51.983380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.388 [2024-10-07 14:51:51.983446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.388 [2024-10-07 14:51:51.983467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.388 [2024-10-07 14:51:51.983478] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.388 [2024-10-07 14:51:51.983487] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.388 [2024-10-07 14:51:51.983508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.388 qpair failed and we were unable to recover it. 
00:41:28.388 [2024-10-07 14:51:51.993432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.388 [2024-10-07 14:51:51.993494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.388 [2024-10-07 14:51:51.993515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.388 [2024-10-07 14:51:51.993529] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.388 [2024-10-07 14:51:51.993539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.388 [2024-10-07 14:51:51.993560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.388 qpair failed and we were unable to recover it. 
00:41:28.388 [2024-10-07 14:51:52.003652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.388 [2024-10-07 14:51:52.003727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.388 [2024-10-07 14:51:52.003749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.388 [2024-10-07 14:51:52.003760] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.388 [2024-10-07 14:51:52.003770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.388 [2024-10-07 14:51:52.003791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.388 qpair failed and we were unable to recover it. 
00:41:28.388 [2024-10-07 14:51:52.013460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.388 [2024-10-07 14:51:52.013524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.388 [2024-10-07 14:51:52.013546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.388 [2024-10-07 14:51:52.013557] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.388 [2024-10-07 14:51:52.013566] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.388 [2024-10-07 14:51:52.013587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.388 qpair failed and we were unable to recover it. 
00:41:28.388 [2024-10-07 14:51:52.023485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.388 [2024-10-07 14:51:52.023550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.388 [2024-10-07 14:51:52.023571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.388 [2024-10-07 14:51:52.023582] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.388 [2024-10-07 14:51:52.023591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.388 [2024-10-07 14:51:52.023614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.388 qpair failed and we were unable to recover it. 
00:41:28.388 [2024-10-07 14:51:52.033526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.388 [2024-10-07 14:51:52.033596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.388 [2024-10-07 14:51:52.033619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.388 [2024-10-07 14:51:52.033634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.388 [2024-10-07 14:51:52.033644] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.388 [2024-10-07 14:51:52.033666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.388 qpair failed and we were unable to recover it. 
00:41:28.388 [2024-10-07 14:51:52.043728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.388 [2024-10-07 14:51:52.043805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.388 [2024-10-07 14:51:52.043826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.388 [2024-10-07 14:51:52.043838] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.388 [2024-10-07 14:51:52.043847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.388 [2024-10-07 14:51:52.043869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.388 qpair failed and we were unable to recover it. 
00:41:28.388 [2024-10-07 14:51:52.053520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.388 [2024-10-07 14:51:52.053588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.388 [2024-10-07 14:51:52.053609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.388 [2024-10-07 14:51:52.053621] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.388 [2024-10-07 14:51:52.053630] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.388 [2024-10-07 14:51:52.053651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.388 qpair failed and we were unable to recover it. 
00:41:28.388 [2024-10-07 14:51:52.063605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.388 [2024-10-07 14:51:52.063674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.388 [2024-10-07 14:51:52.063695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.388 [2024-10-07 14:51:52.063707] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.388 [2024-10-07 14:51:52.063716] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.388 [2024-10-07 14:51:52.063737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.388 qpair failed and we were unable to recover it. 
00:41:28.388 [2024-10-07 14:51:52.073594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.388 [2024-10-07 14:51:52.073665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.389 [2024-10-07 14:51:52.073687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.389 [2024-10-07 14:51:52.073698] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.389 [2024-10-07 14:51:52.073708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.389 [2024-10-07 14:51:52.073729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.389 qpair failed and we were unable to recover it. 
00:41:28.389 [2024-10-07 14:51:52.083884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.389 [2024-10-07 14:51:52.083957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.389 [2024-10-07 14:51:52.083978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.389 [2024-10-07 14:51:52.083993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.389 [2024-10-07 14:51:52.084009] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.389 [2024-10-07 14:51:52.084031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.389 qpair failed and we were unable to recover it. 
00:41:28.389 [2024-10-07 14:51:52.093675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.389 [2024-10-07 14:51:52.093745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.389 [2024-10-07 14:51:52.093767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.389 [2024-10-07 14:51:52.093778] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.389 [2024-10-07 14:51:52.093788] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.389 [2024-10-07 14:51:52.093810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.389 qpair failed and we were unable to recover it. 
00:41:28.651 [2024-10-07 14:51:52.103722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.651 [2024-10-07 14:51:52.103827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.651 [2024-10-07 14:51:52.103849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.651 [2024-10-07 14:51:52.103861] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.651 [2024-10-07 14:51:52.103871] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.651 [2024-10-07 14:51:52.103892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.651 qpair failed and we were unable to recover it. 
00:41:28.651 [2024-10-07 14:51:52.113701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.651 [2024-10-07 14:51:52.113774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.651 [2024-10-07 14:51:52.113795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.651 [2024-10-07 14:51:52.113806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.651 [2024-10-07 14:51:52.113816] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.651 [2024-10-07 14:51:52.113837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.651 qpair failed and we were unable to recover it. 
00:41:28.651 [2024-10-07 14:51:52.124023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.651 [2024-10-07 14:51:52.124102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.651 [2024-10-07 14:51:52.124124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.651 [2024-10-07 14:51:52.124136] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.651 [2024-10-07 14:51:52.124145] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.651 [2024-10-07 14:51:52.124167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.651 qpair failed and we were unable to recover it. 
00:41:28.651 [2024-10-07 14:51:52.133788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.651 [2024-10-07 14:51:52.133858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.651 [2024-10-07 14:51:52.133879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.651 [2024-10-07 14:51:52.133890] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.651 [2024-10-07 14:51:52.133899] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.651 [2024-10-07 14:51:52.133921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.651 qpair failed and we were unable to recover it. 
00:41:28.651 [2024-10-07 14:51:52.143825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.651 [2024-10-07 14:51:52.143892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.651 [2024-10-07 14:51:52.143913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.651 [2024-10-07 14:51:52.143925] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.651 [2024-10-07 14:51:52.143934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.651 [2024-10-07 14:51:52.143955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.651 qpair failed and we were unable to recover it. 
00:41:28.651 [2024-10-07 14:51:52.153846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.651 [2024-10-07 14:51:52.153916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.651 [2024-10-07 14:51:52.153937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.651 [2024-10-07 14:51:52.153949] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.651 [2024-10-07 14:51:52.153959] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.651 [2024-10-07 14:51:52.154008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.651 qpair failed and we were unable to recover it. 
00:41:28.651 [2024-10-07 14:51:52.164090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.651 [2024-10-07 14:51:52.164167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.651 [2024-10-07 14:51:52.164188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.651 [2024-10-07 14:51:52.164199] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.651 [2024-10-07 14:51:52.164209] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.651 [2024-10-07 14:51:52.164235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.651 qpair failed and we were unable to recover it. 
00:41:28.651 [2024-10-07 14:51:52.173839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.651 [2024-10-07 14:51:52.173906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.651 [2024-10-07 14:51:52.173932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.651 [2024-10-07 14:51:52.173943] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.651 [2024-10-07 14:51:52.173953] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.651 [2024-10-07 14:51:52.173974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.651 qpair failed and we were unable to recover it. 
00:41:28.651 [2024-10-07 14:51:52.183927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.651 [2024-10-07 14:51:52.183990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.651 [2024-10-07 14:51:52.184021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.651 [2024-10-07 14:51:52.184034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.651 [2024-10-07 14:51:52.184044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.651 [2024-10-07 14:51:52.184066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.651 qpair failed and we were unable to recover it. 
00:41:28.651 [2024-10-07 14:51:52.193894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.651 [2024-10-07 14:51:52.193998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.651 [2024-10-07 14:51:52.194025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.651 [2024-10-07 14:51:52.194037] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.651 [2024-10-07 14:51:52.194046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.651 [2024-10-07 14:51:52.194068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.651 qpair failed and we were unable to recover it. 
00:41:28.651 [2024-10-07 14:51:52.204096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.651 [2024-10-07 14:51:52.204192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.651 [2024-10-07 14:51:52.204214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.651 [2024-10-07 14:51:52.204225] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.651 [2024-10-07 14:51:52.204235] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.651 [2024-10-07 14:51:52.204256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.651 qpair failed and we were unable to recover it. 
00:41:28.651 [2024-10-07 14:51:52.214044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.651 [2024-10-07 14:51:52.214110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.651 [2024-10-07 14:51:52.214132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.651 [2024-10-07 14:51:52.214143] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.651 [2024-10-07 14:51:52.214152] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.651 [2024-10-07 14:51:52.214178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.651 qpair failed and we were unable to recover it. 
00:41:28.651 [... the same connect-failure sequence (ctrlr.c:762 Unknown controller ID 0x1; nvme_fabric.c:599 Connect command failed, rc -5; nvme_fabric.c:610 sct 1, sc 130; nvme_tcp.c:2459/2236 failed to connect tqpair=0x61500039f100; nvme_qpair.c:804 CQ transport error -6 on qpair id 3; "qpair failed and we were unable to recover it.") repeats at roughly 10 ms intervals from 14:51:52.224 through 14:51:52.565 ...]
00:41:28.915 [2024-10-07 14:51:52.575013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.915 [2024-10-07 14:51:52.575079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.915 [2024-10-07 14:51:52.575101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.915 [2024-10-07 14:51:52.575115] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.915 [2024-10-07 14:51:52.575125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.915 [2024-10-07 14:51:52.575146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.915 qpair failed and we were unable to recover it. 
00:41:28.915 [2024-10-07 14:51:52.585035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.915 [2024-10-07 14:51:52.585145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.915 [2024-10-07 14:51:52.585166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.915 [2024-10-07 14:51:52.585178] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.916 [2024-10-07 14:51:52.585187] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.916 [2024-10-07 14:51:52.585209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.916 qpair failed and we were unable to recover it. 
00:41:28.916 [2024-10-07 14:51:52.595045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.916 [2024-10-07 14:51:52.595113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.916 [2024-10-07 14:51:52.595134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.916 [2024-10-07 14:51:52.595146] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.916 [2024-10-07 14:51:52.595155] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.916 [2024-10-07 14:51:52.595177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.916 qpair failed and we were unable to recover it. 
00:41:28.916 [2024-10-07 14:51:52.605269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.916 [2024-10-07 14:51:52.605344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.916 [2024-10-07 14:51:52.605365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.916 [2024-10-07 14:51:52.605377] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.916 [2024-10-07 14:51:52.605386] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.916 [2024-10-07 14:51:52.605407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.916 qpair failed and we were unable to recover it. 
00:41:28.916 [2024-10-07 14:51:52.615126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:28.916 [2024-10-07 14:51:52.615199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:28.916 [2024-10-07 14:51:52.615223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:28.916 [2024-10-07 14:51:52.615236] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:28.916 [2024-10-07 14:51:52.615246] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:28.916 [2024-10-07 14:51:52.615268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:28.916 qpair failed and we were unable to recover it. 
00:41:29.177 [2024-10-07 14:51:52.625165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.177 [2024-10-07 14:51:52.625230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.177 [2024-10-07 14:51:52.625252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.177 [2024-10-07 14:51:52.625264] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.177 [2024-10-07 14:51:52.625273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.177 [2024-10-07 14:51:52.625294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.177 qpair failed and we were unable to recover it. 
00:41:29.177 [2024-10-07 14:51:52.635166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.177 [2024-10-07 14:51:52.635231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.177 [2024-10-07 14:51:52.635253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.177 [2024-10-07 14:51:52.635264] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.177 [2024-10-07 14:51:52.635273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.177 [2024-10-07 14:51:52.635295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.177 qpair failed and we were unable to recover it. 
00:41:29.177 [2024-10-07 14:51:52.645427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.177 [2024-10-07 14:51:52.645504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.177 [2024-10-07 14:51:52.645525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.177 [2024-10-07 14:51:52.645536] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.177 [2024-10-07 14:51:52.645546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.177 [2024-10-07 14:51:52.645567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.177 qpair failed and we were unable to recover it. 
00:41:29.177 [2024-10-07 14:51:52.655186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.177 [2024-10-07 14:51:52.655252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.177 [2024-10-07 14:51:52.655273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.177 [2024-10-07 14:51:52.655284] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.177 [2024-10-07 14:51:52.655293] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.177 [2024-10-07 14:51:52.655315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.177 qpair failed and we were unable to recover it. 
00:41:29.177 [2024-10-07 14:51:52.665259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.177 [2024-10-07 14:51:52.665324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.177 [2024-10-07 14:51:52.665348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.178 [2024-10-07 14:51:52.665360] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.178 [2024-10-07 14:51:52.665370] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.178 [2024-10-07 14:51:52.665391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.178 qpair failed and we were unable to recover it. 
00:41:29.178 [2024-10-07 14:51:52.675223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.178 [2024-10-07 14:51:52.675292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.178 [2024-10-07 14:51:52.675313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.178 [2024-10-07 14:51:52.675324] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.178 [2024-10-07 14:51:52.675334] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.178 [2024-10-07 14:51:52.675355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.178 qpair failed and we were unable to recover it. 
00:41:29.178 [2024-10-07 14:51:52.685522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.178 [2024-10-07 14:51:52.685600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.178 [2024-10-07 14:51:52.685621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.178 [2024-10-07 14:51:52.685633] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.178 [2024-10-07 14:51:52.685643] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.178 [2024-10-07 14:51:52.685666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.178 qpair failed and we were unable to recover it. 
00:41:29.178 [2024-10-07 14:51:52.695446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.178 [2024-10-07 14:51:52.695519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.178 [2024-10-07 14:51:52.695541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.178 [2024-10-07 14:51:52.695552] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.178 [2024-10-07 14:51:52.695562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.178 [2024-10-07 14:51:52.695584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.178 qpair failed and we were unable to recover it. 
00:41:29.178 [2024-10-07 14:51:52.705385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.178 [2024-10-07 14:51:52.705450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.178 [2024-10-07 14:51:52.705471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.178 [2024-10-07 14:51:52.705482] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.178 [2024-10-07 14:51:52.705491] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.178 [2024-10-07 14:51:52.705513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.178 qpair failed and we were unable to recover it. 
00:41:29.178 [2024-10-07 14:51:52.715344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.178 [2024-10-07 14:51:52.715409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.178 [2024-10-07 14:51:52.715430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.178 [2024-10-07 14:51:52.715442] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.178 [2024-10-07 14:51:52.715451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.178 [2024-10-07 14:51:52.715472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.178 qpair failed and we were unable to recover it. 
00:41:29.178 [2024-10-07 14:51:52.725753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.178 [2024-10-07 14:51:52.725833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.178 [2024-10-07 14:51:52.725855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.178 [2024-10-07 14:51:52.725866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.178 [2024-10-07 14:51:52.725876] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.178 [2024-10-07 14:51:52.725898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.178 qpair failed and we were unable to recover it. 
00:41:29.178 [2024-10-07 14:51:52.735484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.178 [2024-10-07 14:51:52.735552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.178 [2024-10-07 14:51:52.735574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.178 [2024-10-07 14:51:52.735585] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.178 [2024-10-07 14:51:52.735594] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.178 [2024-10-07 14:51:52.735616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.178 qpair failed and we were unable to recover it. 
00:41:29.178 [2024-10-07 14:51:52.745528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.178 [2024-10-07 14:51:52.745609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.178 [2024-10-07 14:51:52.745631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.178 [2024-10-07 14:51:52.745643] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.178 [2024-10-07 14:51:52.745653] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.178 [2024-10-07 14:51:52.745673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.178 qpair failed and we were unable to recover it. 
00:41:29.178 [2024-10-07 14:51:52.755546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.178 [2024-10-07 14:51:52.755615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.178 [2024-10-07 14:51:52.755640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.178 [2024-10-07 14:51:52.755651] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.178 [2024-10-07 14:51:52.755661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.178 [2024-10-07 14:51:52.755682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.178 qpair failed and we were unable to recover it. 
00:41:29.178 [2024-10-07 14:51:52.765663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.178 [2024-10-07 14:51:52.765735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.178 [2024-10-07 14:51:52.765756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.178 [2024-10-07 14:51:52.765768] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.178 [2024-10-07 14:51:52.765777] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.178 [2024-10-07 14:51:52.765798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.178 qpair failed and we were unable to recover it. 
00:41:29.178 [2024-10-07 14:51:52.775587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.178 [2024-10-07 14:51:52.775672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.178 [2024-10-07 14:51:52.775693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.178 [2024-10-07 14:51:52.775704] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.178 [2024-10-07 14:51:52.775713] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.178 [2024-10-07 14:51:52.775736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.178 qpair failed and we were unable to recover it. 
00:41:29.178 [2024-10-07 14:51:52.785616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.178 [2024-10-07 14:51:52.785682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.178 [2024-10-07 14:51:52.785703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.178 [2024-10-07 14:51:52.785715] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.178 [2024-10-07 14:51:52.785724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.178 [2024-10-07 14:51:52.785745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.178 qpair failed and we were unable to recover it. 
00:41:29.178 [2024-10-07 14:51:52.795710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.178 [2024-10-07 14:51:52.795784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.178 [2024-10-07 14:51:52.795805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.178 [2024-10-07 14:51:52.795817] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.178 [2024-10-07 14:51:52.795826] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.178 [2024-10-07 14:51:52.795852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.178 qpair failed and we were unable to recover it. 
00:41:29.178 [2024-10-07 14:51:52.805884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.179 [2024-10-07 14:51:52.805961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.179 [2024-10-07 14:51:52.805983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.179 [2024-10-07 14:51:52.805995] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.179 [2024-10-07 14:51:52.806011] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.179 [2024-10-07 14:51:52.806033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.179 qpair failed and we were unable to recover it. 
00:41:29.179 [2024-10-07 14:51:52.815683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.179 [2024-10-07 14:51:52.815755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.179 [2024-10-07 14:51:52.815777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.179 [2024-10-07 14:51:52.815788] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.179 [2024-10-07 14:51:52.815798] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.179 [2024-10-07 14:51:52.815819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.179 qpair failed and we were unable to recover it. 
00:41:29.179 [2024-10-07 14:51:52.825636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.179 [2024-10-07 14:51:52.825703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.179 [2024-10-07 14:51:52.825724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.179 [2024-10-07 14:51:52.825735] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.179 [2024-10-07 14:51:52.825744] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.179 [2024-10-07 14:51:52.825770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.179 qpair failed and we were unable to recover it. 
00:41:29.179 [2024-10-07 14:51:52.835739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.179 [2024-10-07 14:51:52.835811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.179 [2024-10-07 14:51:52.835831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.179 [2024-10-07 14:51:52.835843] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.179 [2024-10-07 14:51:52.835852] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.179 [2024-10-07 14:51:52.835873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.179 qpair failed and we were unable to recover it. 
00:41:29.179 [2024-10-07 14:51:52.845958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.179 [2024-10-07 14:51:52.846059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.179 [2024-10-07 14:51:52.846085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.179 [2024-10-07 14:51:52.846096] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.179 [2024-10-07 14:51:52.846105] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.179 [2024-10-07 14:51:52.846127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.179 qpair failed and we were unable to recover it. 
00:41:29.179 [2024-10-07 14:51:52.855794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.179 [2024-10-07 14:51:52.855860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.179 [2024-10-07 14:51:52.855881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.179 [2024-10-07 14:51:52.855893] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.179 [2024-10-07 14:51:52.855902] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.179 [2024-10-07 14:51:52.855923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.179 qpair failed and we were unable to recover it. 
00:41:29.179 … 00:41:29.705 (35 further repeats of the same CONNECT failure sequence, [2024-10-07 14:51:52.865920] through [2024-10-07 14:51:53.207119], elided: Unknown controller ID 0x1 → Connect command failed, rc -5, traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 → sct 1, sc 130 → CQ transport error -6 (No such device or address) on qpair id 3 → qpair failed and we were unable to recover it.)
00:41:29.705 [2024-10-07 14:51:53.216772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.705 [2024-10-07 14:51:53.216842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.705 [2024-10-07 14:51:53.216864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.705 [2024-10-07 14:51:53.216875] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.705 [2024-10-07 14:51:53.216885] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.705 [2024-10-07 14:51:53.216906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.705 qpair failed and we were unable to recover it. 
00:41:29.705 [2024-10-07 14:51:53.226824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.705 [2024-10-07 14:51:53.226891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.705 [2024-10-07 14:51:53.226912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.705 [2024-10-07 14:51:53.226924] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.705 [2024-10-07 14:51:53.226933] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.705 [2024-10-07 14:51:53.226955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.705 qpair failed and we were unable to recover it. 
00:41:29.705 [2024-10-07 14:51:53.236750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.705 [2024-10-07 14:51:53.236821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.705 [2024-10-07 14:51:53.236843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.705 [2024-10-07 14:51:53.236854] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.705 [2024-10-07 14:51:53.236864] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.705 [2024-10-07 14:51:53.236885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.705 qpair failed and we were unable to recover it. 
00:41:29.705 [2024-10-07 14:51:53.247060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.705 [2024-10-07 14:51:53.247162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.705 [2024-10-07 14:51:53.247189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.705 [2024-10-07 14:51:53.247200] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.705 [2024-10-07 14:51:53.247209] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.705 [2024-10-07 14:51:53.247231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.705 qpair failed and we were unable to recover it. 
00:41:29.705 [2024-10-07 14:51:53.256902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.705 [2024-10-07 14:51:53.256967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.705 [2024-10-07 14:51:53.256989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.705 [2024-10-07 14:51:53.257006] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.705 [2024-10-07 14:51:53.257015] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.706 [2024-10-07 14:51:53.257037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.706 qpair failed and we were unable to recover it. 
00:41:29.706 [2024-10-07 14:51:53.266913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.706 [2024-10-07 14:51:53.267019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.706 [2024-10-07 14:51:53.267041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.706 [2024-10-07 14:51:53.267053] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.706 [2024-10-07 14:51:53.267062] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.706 [2024-10-07 14:51:53.267084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.706 qpair failed and we were unable to recover it. 
00:41:29.706 [2024-10-07 14:51:53.276921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.706 [2024-10-07 14:51:53.276985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.706 [2024-10-07 14:51:53.277013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.706 [2024-10-07 14:51:53.277024] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.706 [2024-10-07 14:51:53.277034] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.706 [2024-10-07 14:51:53.277055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.706 qpair failed and we were unable to recover it. 
00:41:29.706 [2024-10-07 14:51:53.287193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.706 [2024-10-07 14:51:53.287274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.706 [2024-10-07 14:51:53.287295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.706 [2024-10-07 14:51:53.287306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.706 [2024-10-07 14:51:53.287316] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.706 [2024-10-07 14:51:53.287341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.706 qpair failed and we were unable to recover it. 
00:41:29.706 [2024-10-07 14:51:53.296985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.706 [2024-10-07 14:51:53.297059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.706 [2024-10-07 14:51:53.297081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.706 [2024-10-07 14:51:53.297092] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.706 [2024-10-07 14:51:53.297101] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.706 [2024-10-07 14:51:53.297124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.706 qpair failed and we were unable to recover it. 
00:41:29.706 [2024-10-07 14:51:53.306985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.706 [2024-10-07 14:51:53.307061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.706 [2024-10-07 14:51:53.307082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.706 [2024-10-07 14:51:53.307094] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.706 [2024-10-07 14:51:53.307104] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.706 [2024-10-07 14:51:53.307125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.706 qpair failed and we were unable to recover it. 
00:41:29.706 [2024-10-07 14:51:53.317063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.706 [2024-10-07 14:51:53.317130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.706 [2024-10-07 14:51:53.317152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.706 [2024-10-07 14:51:53.317163] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.706 [2024-10-07 14:51:53.317172] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.706 [2024-10-07 14:51:53.317194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.706 qpair failed and we were unable to recover it. 
00:41:29.706 [2024-10-07 14:51:53.327259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.706 [2024-10-07 14:51:53.327332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.706 [2024-10-07 14:51:53.327354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.706 [2024-10-07 14:51:53.327365] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.706 [2024-10-07 14:51:53.327375] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.706 [2024-10-07 14:51:53.327396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.706 qpair failed and we were unable to recover it. 
00:41:29.706 [2024-10-07 14:51:53.337107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.706 [2024-10-07 14:51:53.337182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.706 [2024-10-07 14:51:53.337210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.706 [2024-10-07 14:51:53.337223] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.706 [2024-10-07 14:51:53.337232] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.706 [2024-10-07 14:51:53.337254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.706 qpair failed and we were unable to recover it. 
00:41:29.706 [2024-10-07 14:51:53.347074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.706 [2024-10-07 14:51:53.347148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.706 [2024-10-07 14:51:53.347170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.706 [2024-10-07 14:51:53.347181] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.706 [2024-10-07 14:51:53.347190] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.706 [2024-10-07 14:51:53.347212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.706 qpair failed and we were unable to recover it. 
00:41:29.706 [2024-10-07 14:51:53.357199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.706 [2024-10-07 14:51:53.357271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.706 [2024-10-07 14:51:53.357293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.706 [2024-10-07 14:51:53.357304] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.706 [2024-10-07 14:51:53.357314] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.706 [2024-10-07 14:51:53.357335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.706 qpair failed and we were unable to recover it. 
00:41:29.706 [2024-10-07 14:51:53.367391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.706 [2024-10-07 14:51:53.367468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.706 [2024-10-07 14:51:53.367490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.706 [2024-10-07 14:51:53.367501] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.706 [2024-10-07 14:51:53.367510] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.706 [2024-10-07 14:51:53.367532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.706 qpair failed and we were unable to recover it. 
00:41:29.706 [2024-10-07 14:51:53.377239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.706 [2024-10-07 14:51:53.377343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.706 [2024-10-07 14:51:53.377365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.706 [2024-10-07 14:51:53.377376] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.706 [2024-10-07 14:51:53.377386] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.706 [2024-10-07 14:51:53.377412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.706 qpair failed and we were unable to recover it. 
00:41:29.706 [2024-10-07 14:51:53.387225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.706 [2024-10-07 14:51:53.387295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.706 [2024-10-07 14:51:53.387316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.706 [2024-10-07 14:51:53.387328] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.706 [2024-10-07 14:51:53.387337] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.706 [2024-10-07 14:51:53.387358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.706 qpair failed and we were unable to recover it. 
00:41:29.706 [2024-10-07 14:51:53.397264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.706 [2024-10-07 14:51:53.397333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.706 [2024-10-07 14:51:53.397355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.706 [2024-10-07 14:51:53.397367] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.706 [2024-10-07 14:51:53.397377] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.706 [2024-10-07 14:51:53.397399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.706 qpair failed and we were unable to recover it. 
00:41:29.707 [2024-10-07 14:51:53.407500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.707 [2024-10-07 14:51:53.407579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.707 [2024-10-07 14:51:53.407601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.707 [2024-10-07 14:51:53.407612] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.707 [2024-10-07 14:51:53.407622] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.707 [2024-10-07 14:51:53.407643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.707 qpair failed and we were unable to recover it. 
00:41:29.968 [2024-10-07 14:51:53.417252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.968 [2024-10-07 14:51:53.417317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.968 [2024-10-07 14:51:53.417339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.968 [2024-10-07 14:51:53.417350] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.968 [2024-10-07 14:51:53.417359] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.968 [2024-10-07 14:51:53.417380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.968 qpair failed and we were unable to recover it. 
00:41:29.968 [2024-10-07 14:51:53.427257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.968 [2024-10-07 14:51:53.427341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.968 [2024-10-07 14:51:53.427362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.968 [2024-10-07 14:51:53.427374] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.968 [2024-10-07 14:51:53.427384] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.968 [2024-10-07 14:51:53.427405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.968 qpair failed and we were unable to recover it. 
00:41:29.968 [2024-10-07 14:51:53.437444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.968 [2024-10-07 14:51:53.437514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.968 [2024-10-07 14:51:53.437535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.968 [2024-10-07 14:51:53.437548] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.968 [2024-10-07 14:51:53.437558] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.968 [2024-10-07 14:51:53.437585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.968 qpair failed and we were unable to recover it. 
00:41:29.968 [2024-10-07 14:51:53.447620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.968 [2024-10-07 14:51:53.447704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.968 [2024-10-07 14:51:53.447726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.968 [2024-10-07 14:51:53.447738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.968 [2024-10-07 14:51:53.447748] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.968 [2024-10-07 14:51:53.447769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.968 qpair failed and we were unable to recover it. 
00:41:29.968 [2024-10-07 14:51:53.457417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.968 [2024-10-07 14:51:53.457481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.968 [2024-10-07 14:51:53.457503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.968 [2024-10-07 14:51:53.457514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.968 [2024-10-07 14:51:53.457523] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.968 [2024-10-07 14:51:53.457544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.968 qpair failed and we were unable to recover it. 
00:41:29.969 [2024-10-07 14:51:53.467455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.969 [2024-10-07 14:51:53.467558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.969 [2024-10-07 14:51:53.467581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.969 [2024-10-07 14:51:53.467592] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.969 [2024-10-07 14:51:53.467604] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.969 [2024-10-07 14:51:53.467627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.969 qpair failed and we were unable to recover it. 
00:41:29.969 [2024-10-07 14:51:53.477480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.969 [2024-10-07 14:51:53.477552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.969 [2024-10-07 14:51:53.477574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.969 [2024-10-07 14:51:53.477585] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.969 [2024-10-07 14:51:53.477595] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.969 [2024-10-07 14:51:53.477616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.969 qpair failed and we were unable to recover it. 
00:41:29.969 [2024-10-07 14:51:53.487709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.969 [2024-10-07 14:51:53.487792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.969 [2024-10-07 14:51:53.487813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.969 [2024-10-07 14:51:53.487825] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.969 [2024-10-07 14:51:53.487835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.969 [2024-10-07 14:51:53.487861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.969 qpair failed and we were unable to recover it. 
00:41:29.969 [2024-10-07 14:51:53.497548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.969 [2024-10-07 14:51:53.497612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.969 [2024-10-07 14:51:53.497634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.969 [2024-10-07 14:51:53.497645] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.969 [2024-10-07 14:51:53.497655] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.969 [2024-10-07 14:51:53.497677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.969 qpair failed and we were unable to recover it. 
00:41:29.969 [2024-10-07 14:51:53.507594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:41:29.969 [2024-10-07 14:51:53.507674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:41:29.969 [2024-10-07 14:51:53.507695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:41:29.969 [2024-10-07 14:51:53.507707] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:41:29.969 [2024-10-07 14:51:53.507717] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100 00:41:29.969 [2024-10-07 14:51:53.507737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:41:29.969 qpair failed and we were unable to recover it. 
00:41:29.969 [2024-10-07 14:51:53.517599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:29.969 [2024-10-07 14:51:53.517670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:29.969 [2024-10-07 14:51:53.517691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:29.969 [2024-10-07 14:51:53.517703] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:29.969 [2024-10-07 14:51:53.517712] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:29.969 [2024-10-07 14:51:53.517733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:29.969 qpair failed and we were unable to recover it.
00:41:29.969 [2024-10-07 14:51:53.527852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:29.969 [2024-10-07 14:51:53.527932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:29.969 [2024-10-07 14:51:53.527953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:29.969 [2024-10-07 14:51:53.527965] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:29.969 [2024-10-07 14:51:53.527974] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:29.969 [2024-10-07 14:51:53.527995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:29.969 qpair failed and we were unable to recover it.
00:41:29.969 [2024-10-07 14:51:53.537572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:29.969 [2024-10-07 14:51:53.537634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:29.969 [2024-10-07 14:51:53.537655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:29.969 [2024-10-07 14:51:53.537666] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:29.969 [2024-10-07 14:51:53.537675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:29.969 [2024-10-07 14:51:53.537698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:29.969 qpair failed and we were unable to recover it.
00:41:29.969 [2024-10-07 14:51:53.547679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:29.969 [2024-10-07 14:51:53.547748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:29.969 [2024-10-07 14:51:53.547769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:29.969 [2024-10-07 14:51:53.547780] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:29.969 [2024-10-07 14:51:53.547790] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:29.969 [2024-10-07 14:51:53.547811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:29.969 qpair failed and we were unable to recover it.
00:41:29.969 [2024-10-07 14:51:53.557760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:29.969 [2024-10-07 14:51:53.557828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:29.969 [2024-10-07 14:51:53.557849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:29.969 [2024-10-07 14:51:53.557863] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:29.969 [2024-10-07 14:51:53.557873] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:29.969 [2024-10-07 14:51:53.557895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:29.969 qpair failed and we were unable to recover it.
00:41:29.969 [2024-10-07 14:51:53.567935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:29.969 [2024-10-07 14:51:53.568016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:29.969 [2024-10-07 14:51:53.568038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:29.969 [2024-10-07 14:51:53.568049] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:29.969 [2024-10-07 14:51:53.568059] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:29.969 [2024-10-07 14:51:53.568080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:29.969 qpair failed and we were unable to recover it.
00:41:29.969 [2024-10-07 14:51:53.577834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:29.969 [2024-10-07 14:51:53.577905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:29.970 [2024-10-07 14:51:53.577926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:29.970 [2024-10-07 14:51:53.577938] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:29.970 [2024-10-07 14:51:53.577947] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:29.970 [2024-10-07 14:51:53.577969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:29.970 qpair failed and we were unable to recover it.
00:41:29.970 [2024-10-07 14:51:53.587809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:29.970 [2024-10-07 14:51:53.587878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:29.970 [2024-10-07 14:51:53.587900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:29.970 [2024-10-07 14:51:53.587911] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:29.970 [2024-10-07 14:51:53.587921] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:29.970 [2024-10-07 14:51:53.587942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:29.970 qpair failed and we were unable to recover it.
00:41:29.970 [2024-10-07 14:51:53.597810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:29.970 [2024-10-07 14:51:53.597875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:29.970 [2024-10-07 14:51:53.597896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:29.970 [2024-10-07 14:51:53.597908] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:29.970 [2024-10-07 14:51:53.597917] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:29.970 [2024-10-07 14:51:53.597938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:29.970 qpair failed and we were unable to recover it.
00:41:29.970 [2024-10-07 14:51:53.608080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:29.970 [2024-10-07 14:51:53.608159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:29.970 [2024-10-07 14:51:53.608180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:29.970 [2024-10-07 14:51:53.608191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:29.970 [2024-10-07 14:51:53.608200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:29.970 [2024-10-07 14:51:53.608222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:29.970 qpair failed and we were unable to recover it.
00:41:29.970 [2024-10-07 14:51:53.617890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:29.970 [2024-10-07 14:51:53.617953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:29.970 [2024-10-07 14:51:53.617975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:29.970 [2024-10-07 14:51:53.617986] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:29.970 [2024-10-07 14:51:53.617995] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:29.970 [2024-10-07 14:51:53.618025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:29.970 qpair failed and we were unable to recover it.
00:41:29.970 [2024-10-07 14:51:53.627909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:29.970 [2024-10-07 14:51:53.628020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:29.970 [2024-10-07 14:51:53.628042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:29.970 [2024-10-07 14:51:53.628053] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:29.970 [2024-10-07 14:51:53.628063] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:29.970 [2024-10-07 14:51:53.628085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:29.970 qpair failed and we were unable to recover it.
00:41:29.970 [2024-10-07 14:51:53.637940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:29.970 [2024-10-07 14:51:53.638014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:29.970 [2024-10-07 14:51:53.638036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:29.970 [2024-10-07 14:51:53.638047] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:29.970 [2024-10-07 14:51:53.638057] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:29.970 [2024-10-07 14:51:53.638079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:29.970 qpair failed and we were unable to recover it.
00:41:29.970 [2024-10-07 14:51:53.648151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:29.970 [2024-10-07 14:51:53.648236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:29.970 [2024-10-07 14:51:53.648257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:29.970 [2024-10-07 14:51:53.648274] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:29.970 [2024-10-07 14:51:53.648283] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:29.970 [2024-10-07 14:51:53.648305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:29.970 qpair failed and we were unable to recover it.
00:41:29.970 [2024-10-07 14:51:53.657909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:29.970 [2024-10-07 14:51:53.657980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:29.970 [2024-10-07 14:51:53.658010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:29.970 [2024-10-07 14:51:53.658022] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:29.970 [2024-10-07 14:51:53.658031] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:29.970 [2024-10-07 14:51:53.658054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:29.970 qpair failed and we were unable to recover it.
00:41:29.970 [2024-10-07 14:51:53.668037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:29.970 [2024-10-07 14:51:53.668106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:29.970 [2024-10-07 14:51:53.668128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:29.970 [2024-10-07 14:51:53.668139] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:29.970 [2024-10-07 14:51:53.668148] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:29.970 [2024-10-07 14:51:53.668170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:29.970 qpair failed and we were unable to recover it.
00:41:30.231 [2024-10-07 14:51:53.678058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:30.231 [2024-10-07 14:51:53.678127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:30.231 [2024-10-07 14:51:53.678148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:30.232 [2024-10-07 14:51:53.678159] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:30.232 [2024-10-07 14:51:53.678169] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:30.232 [2024-10-07 14:51:53.678190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:30.232 qpair failed and we were unable to recover it.
00:41:30.232 [2024-10-07 14:51:53.688253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:30.232 [2024-10-07 14:51:53.688335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:30.232 [2024-10-07 14:51:53.688356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:30.232 [2024-10-07 14:51:53.688367] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:30.232 [2024-10-07 14:51:53.688377] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:30.232 [2024-10-07 14:51:53.688399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:30.232 qpair failed and we were unable to recover it.
00:41:30.232 [2024-10-07 14:51:53.698092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:30.232 [2024-10-07 14:51:53.698164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:30.232 [2024-10-07 14:51:53.698185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:30.232 [2024-10-07 14:51:53.698204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:30.232 [2024-10-07 14:51:53.698214] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:30.232 [2024-10-07 14:51:53.698235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:30.232 qpair failed and we were unable to recover it.
00:41:30.232 [2024-10-07 14:51:53.708160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:30.232 [2024-10-07 14:51:53.708227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:30.232 [2024-10-07 14:51:53.708248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:30.232 [2024-10-07 14:51:53.708259] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:30.232 [2024-10-07 14:51:53.708269] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:30.232 [2024-10-07 14:51:53.708289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:30.232 qpair failed and we were unable to recover it.
00:41:30.232 [2024-10-07 14:51:53.718115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:30.232 [2024-10-07 14:51:53.718184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:30.232 [2024-10-07 14:51:53.718205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:30.232 [2024-10-07 14:51:53.718217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:30.232 [2024-10-07 14:51:53.718226] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:30.232 [2024-10-07 14:51:53.718248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:30.232 qpair failed and we were unable to recover it.
00:41:30.232 [2024-10-07 14:51:53.728395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:30.232 [2024-10-07 14:51:53.728473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:30.232 [2024-10-07 14:51:53.728494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:30.232 [2024-10-07 14:51:53.728506] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:30.232 [2024-10-07 14:51:53.728515] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:30.232 [2024-10-07 14:51:53.728536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:30.232 qpair failed and we were unable to recover it.
00:41:30.232 [2024-10-07 14:51:53.738245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:30.232 [2024-10-07 14:51:53.738310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:30.232 [2024-10-07 14:51:53.738334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:30.232 [2024-10-07 14:51:53.738346] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:30.232 [2024-10-07 14:51:53.738355] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:30.232 [2024-10-07 14:51:53.738376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:30.232 qpair failed and we were unable to recover it.
00:41:30.232 [2024-10-07 14:51:53.748233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:30.232 [2024-10-07 14:51:53.748299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:30.232 [2024-10-07 14:51:53.748320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:30.232 [2024-10-07 14:51:53.748331] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:30.232 [2024-10-07 14:51:53.748340] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:30.232 [2024-10-07 14:51:53.748362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:30.232 qpair failed and we were unable to recover it.
00:41:30.232 [2024-10-07 14:51:53.758295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:30.232 [2024-10-07 14:51:53.758371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:30.232 [2024-10-07 14:51:53.758394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:30.232 [2024-10-07 14:51:53.758407] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:30.232 [2024-10-07 14:51:53.758417] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500039f100
00:41:30.232 [2024-10-07 14:51:53.758441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:41:30.232 qpair failed and we were unable to recover it.
00:41:30.232 Read completed with error (sct=0, sc=8)
00:41:30.232 starting I/O failed
00:41:30.232 Read completed with error (sct=0, sc=8)
00:41:30.232 starting I/O failed
00:41:30.232 Read completed with error (sct=0, sc=8)
00:41:30.232 starting I/O failed
00:41:30.232 Read completed with error (sct=0, sc=8)
00:41:30.232 starting I/O failed
00:41:30.232 Read completed with error (sct=0, sc=8)
00:41:30.232 starting I/O failed
00:41:30.232 Read completed with error (sct=0, sc=8)
00:41:30.232 starting I/O failed
00:41:30.232 Read completed with error (sct=0, sc=8)
00:41:30.232 starting I/O failed
00:41:30.232 Read completed with error (sct=0, sc=8)
00:41:30.232 starting I/O failed
00:41:30.232 Read completed with error (sct=0, sc=8)
00:41:30.232 starting I/O failed
00:41:30.232 Read completed with error (sct=0, sc=8)
00:41:30.232 starting I/O failed
00:41:30.232 Read completed with error (sct=0, sc=8)
00:41:30.232 starting I/O failed
00:41:30.232 Read completed with error (sct=0, sc=8)
00:41:30.232 starting I/O failed
00:41:30.232 Read completed with error (sct=0, sc=8)
00:41:30.232 starting I/O failed
00:41:30.232 Read completed with error (sct=0, sc=8)
00:41:30.232 starting I/O failed
00:41:30.232 Write completed with error (sct=0, sc=8)
00:41:30.232 starting I/O failed
00:41:30.232 Write completed with error (sct=0, sc=8)
00:41:30.232 starting I/O failed
00:41:30.232 Write completed with error (sct=0, sc=8)
00:41:30.232 starting I/O failed
00:41:30.232 Write completed with error (sct=0, sc=8)
00:41:30.232 starting I/O failed
00:41:30.232 Write completed with error (sct=0, sc=8)
00:41:30.232 starting I/O failed
00:41:30.232 Read completed with error (sct=0, sc=8)
00:41:30.232 starting I/O failed
00:41:30.232 Read completed with error (sct=0, sc=8)
00:41:30.233 starting I/O failed
00:41:30.233 Write completed with error (sct=0, sc=8)
00:41:30.233 starting I/O failed
00:41:30.233 Write completed with error (sct=0, sc=8)
00:41:30.233 starting I/O failed
00:41:30.233 Write completed with error (sct=0, sc=8)
00:41:30.233 starting I/O failed
00:41:30.233 Write completed with error (sct=0, sc=8)
00:41:30.233 starting I/O failed
00:41:30.233 Write completed with error (sct=0, sc=8)
00:41:30.233 starting I/O failed
00:41:30.233 Read completed with error (sct=0, sc=8)
00:41:30.233 starting I/O failed
00:41:30.233 Write completed with error (sct=0, sc=8)
00:41:30.233 starting I/O failed
00:41:30.233 Write completed with error (sct=0, sc=8)
00:41:30.233 starting I/O failed
00:41:30.233 Read completed with error (sct=0, sc=8)
00:41:30.233 starting I/O failed
00:41:30.233 Write completed with error (sct=0, sc=8)
00:41:30.233 starting I/O failed
00:41:30.233 Write completed with error (sct=0, sc=8)
00:41:30.233 starting I/O failed
00:41:30.233 [2024-10-07 14:51:53.759104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:30.233 [2024-10-07 14:51:53.768522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:30.233 [2024-10-07 14:51:53.768593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:30.233 [2024-10-07 14:51:53.768615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:30.233 [2024-10-07 14:51:53.768625] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:30.233 [2024-10-07 14:51:53.768633] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00
00:41:30.233 [2024-10-07 14:51:53.768653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:30.233 qpair failed and we were unable to recover it.
00:41:30.233 [2024-10-07 14:51:53.778306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:30.233 [2024-10-07 14:51:53.778365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:30.233 [2024-10-07 14:51:53.778384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:30.233 [2024-10-07 14:51:53.778393] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:30.233 [2024-10-07 14:51:53.778400] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003aff00
00:41:30.233 [2024-10-07 14:51:53.778419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:41:30.233 qpair failed and we were unable to recover it.
00:41:30.233 [2024-10-07 14:51:53.788657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:30.233 [2024-10-07 14:51:53.788807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:30.233 [2024-10-07 14:51:53.788888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:30.233 [2024-10-07 14:51:53.788929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:30.233 [2024-10-07 14:51:53.788959] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003c0080
00:41:30.233 [2024-10-07 14:51:53.789048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:41:30.233 qpair failed and we were unable to recover it.
00:41:30.233 [2024-10-07 14:51:53.798440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:41:30.233 [2024-10-07 14:51:53.798546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:41:30.233 [2024-10-07 14:51:53.798598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:41:30.233 [2024-10-07 14:51:53.798628] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:41:30.233 [2024-10-07 14:51:53.798652] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150003c0080
00:41:30.233 [2024-10-07 14:51:53.798706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:41:30.233 qpair failed and we were unable to recover it.
00:41:30.233 [2024-10-07 14:51:53.799327] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:41:30.233 A controller has encountered a failure and is being reset.
00:41:30.233 Controller properly reset.
00:41:30.233 Initializing NVMe Controllers
00:41:30.233 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:41:30.233 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:41:30.233 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:41:30.233 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:41:30.233 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:41:30.233 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:41:30.233 Initialization complete. Launching workers.
00:41:30.233 Starting thread on core 1 00:41:30.233 Starting thread on core 2 00:41:30.233 Starting thread on core 3 00:41:30.233 Starting thread on core 0 00:41:30.493 14:51:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:41:30.493 00:41:30.493 real 0m11.608s 00:41:30.493 user 0m21.006s 00:41:30.493 sys 0m3.695s 00:41:30.493 14:51:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:30.493 14:51:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:41:30.493 ************************************ 00:41:30.493 END TEST nvmf_target_disconnect_tc2 00:41:30.493 ************************************ 00:41:30.493 14:51:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:41:30.493 14:51:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:41:30.493 14:51:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:41:30.493 14:51:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:30.493 14:51:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:41:30.494 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:30.494 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:41:30.494 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:30.494 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:30.494 rmmod nvme_tcp 00:41:30.494 rmmod nvme_fabrics 00:41:30.494 rmmod nvme_keyring 00:41:30.494 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:41:30.494 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:41:30.494 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:41:30.494 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 3308588 ']' 00:41:30.494 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 3308588 00:41:30.494 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3308588 ']' 00:41:30.494 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 3308588 00:41:30.494 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:41:30.494 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:30.494 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3308588 00:41:30.494 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:41:30.494 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:41:30.494 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3308588' 00:41:30.494 killing process with pid 3308588 00:41:30.494 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 3308588 00:41:30.494 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 3308588 00:41:31.435 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:31.435 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:31.435 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:31.435 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:41:31.435 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:41:31.435 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:31.435 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:41:31.435 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:31.435 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:31.435 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:31.435 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:31.435 14:51:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:33.345 14:51:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:33.345 00:41:33.345 real 0m22.509s 00:41:33.345 user 0m49.985s 00:41:33.345 sys 0m9.962s 00:41:33.345 14:51:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:33.345 14:51:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:41:33.345 ************************************ 00:41:33.345 END TEST nvmf_target_disconnect 00:41:33.345 ************************************ 00:41:33.345 14:51:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:41:33.345 00:41:33.345 real 8m23.816s 00:41:33.345 user 18m35.269s 00:41:33.345 sys 2m28.865s 00:41:33.345 14:51:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:33.345 14:51:56 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:41:33.345 ************************************ 00:41:33.345 END TEST nvmf_host 00:41:33.345 ************************************ 00:41:33.345 14:51:57 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:41:33.345 14:51:57 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:41:33.345 14:51:57 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:41:33.345 14:51:57 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:33.345 14:51:57 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:33.345 14:51:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:33.607 ************************************ 00:41:33.607 START TEST nvmf_target_core_interrupt_mode 00:41:33.607 ************************************ 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:41:33.607 * Looking for test storage... 
00:41:33.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:41:33.607 14:51:57 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:33.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.607 --rc 
genhtml_branch_coverage=1 00:41:33.607 --rc genhtml_function_coverage=1 00:41:33.607 --rc genhtml_legend=1 00:41:33.607 --rc geninfo_all_blocks=1 00:41:33.607 --rc geninfo_unexecuted_blocks=1 00:41:33.607 00:41:33.607 ' 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:33.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.607 --rc genhtml_branch_coverage=1 00:41:33.607 --rc genhtml_function_coverage=1 00:41:33.607 --rc genhtml_legend=1 00:41:33.607 --rc geninfo_all_blocks=1 00:41:33.607 --rc geninfo_unexecuted_blocks=1 00:41:33.607 00:41:33.607 ' 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:33.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.607 --rc genhtml_branch_coverage=1 00:41:33.607 --rc genhtml_function_coverage=1 00:41:33.607 --rc genhtml_legend=1 00:41:33.607 --rc geninfo_all_blocks=1 00:41:33.607 --rc geninfo_unexecuted_blocks=1 00:41:33.607 00:41:33.607 ' 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:33.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.607 --rc genhtml_branch_coverage=1 00:41:33.607 --rc genhtml_function_coverage=1 00:41:33.607 --rc genhtml_legend=1 00:41:33.607 --rc geninfo_all_blocks=1 00:41:33.607 --rc geninfo_unexecuted_blocks=1 00:41:33.607 00:41:33.607 ' 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:33.607 
14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:41:33.607 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:33.608 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:33.608 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:33.608 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.608 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.608 14:51:57 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.608 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:41:33.608 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.608 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:41:33.608 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:33.608 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:33.608 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:33.608 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:33.608 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:33.608 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:33.608 
14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:33.608 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:33.608 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:33.608 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:33.608 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:41:33.608 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:41:33.608 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:41:33.608 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:41:33.608 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:33.608 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:33.608 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:33.869 ************************************ 00:41:33.869 START TEST nvmf_abort 00:41:33.869 ************************************ 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:41:33.869 * Looking for test storage... 
00:41:33.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:41:33.869 14:51:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:33.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.869 --rc genhtml_branch_coverage=1 00:41:33.869 --rc genhtml_function_coverage=1 00:41:33.869 --rc genhtml_legend=1 00:41:33.869 --rc geninfo_all_blocks=1 00:41:33.869 --rc geninfo_unexecuted_blocks=1 00:41:33.869 00:41:33.869 ' 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:33.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.869 --rc genhtml_branch_coverage=1 00:41:33.869 --rc genhtml_function_coverage=1 00:41:33.869 --rc genhtml_legend=1 00:41:33.869 --rc geninfo_all_blocks=1 00:41:33.869 --rc geninfo_unexecuted_blocks=1 00:41:33.869 00:41:33.869 ' 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:33.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.869 --rc genhtml_branch_coverage=1 00:41:33.869 --rc genhtml_function_coverage=1 00:41:33.869 --rc genhtml_legend=1 00:41:33.869 --rc geninfo_all_blocks=1 00:41:33.869 --rc geninfo_unexecuted_blocks=1 00:41:33.869 00:41:33.869 ' 00:41:33.869 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:33.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.869 --rc genhtml_branch_coverage=1 00:41:33.869 --rc genhtml_function_coverage=1 00:41:33.869 --rc genhtml_legend=1 00:41:33.869 --rc geninfo_all_blocks=1 00:41:33.869 --rc geninfo_unexecuted_blocks=1 00:41:33.869 00:41:33.869 ' 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:33.870 14:51:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:33.870 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:34.130 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:34.130 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:34.130 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:34.130 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:34.130 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:34.130 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:34.130 14:51:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:34.130 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:41:34.130 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:41:34.130 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:34.131 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:34.131 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:34.131 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:34.131 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:34.131 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:34.131 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:34.131 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:34.131 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:34.131 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:34.131 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:41:34.131 14:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:41:41.262 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:41:41.262 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:41:41.262 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:41.262 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:41.262 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:41.262 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:41.262 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:41.262 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:41:41.262 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:41.262 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:41:41.262 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:41:41.262 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:41:41.262 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:41:41.262 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:41:41.262 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:41:41.262 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:41.262 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:41.262 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:41.262 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:41.262 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:41.262 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:41.262 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:41.262 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:41.262 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:41.263 14:52:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:41.263 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:41.263 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:41.263 
14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:41:41.263 Found net devices under 0000:31:00.0: cvl_0_0 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:41.263 Found net devices under 0000:31:00.1: cvl_0_1 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:41.263 14:52:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:41.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:41.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:41:41.263 00:41:41.263 --- 10.0.0.2 ping statistics --- 00:41:41.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:41.263 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:41.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:41.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:41:41.263 00:41:41.263 --- 10.0.0.1 ping statistics --- 00:41:41.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:41.263 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=3314422 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 3314422 00:41:41.263 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:41:41.264 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3314422 ']' 00:41:41.264 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:41.264 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:41.264 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:41.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:41.264 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:41.264 14:52:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:41:41.525 [2024-10-07 14:52:05.050609] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:41.525 [2024-10-07 14:52:05.052940] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:41:41.525 [2024-10-07 14:52:05.053032] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:41.525 [2024-10-07 14:52:05.201539] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:41.785 [2024-10-07 14:52:05.412458] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:41.785 [2024-10-07 14:52:05.412503] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:41.785 [2024-10-07 14:52:05.412517] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:41.785 [2024-10-07 14:52:05.412527] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:41.785 [2024-10-07 14:52:05.412538] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:41.785 [2024-10-07 14:52:05.414220] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:41:41.785 [2024-10-07 14:52:05.414481] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:41:41.785 [2024-10-07 14:52:05.414504] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:41:42.045 [2024-10-07 14:52:05.662793] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:42.045 [2024-10-07 14:52:05.664024] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:42.045 [2024-10-07 14:52:05.664157] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:41:42.045 [2024-10-07 14:52:05.664314] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:42.305 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:42.305 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:41:42.305 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:42.305 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:42.305 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:41:42.305 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:42.305 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:41:42.305 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:42.305 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:41:42.305 [2024-10-07 14:52:05.859687] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:42.305 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.305 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:41:42.305 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:42.305 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:41:42.305 Malloc0 00:41:42.305 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.305 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:41:42.305 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:42.305 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:41:42.305 Delay0 00:41:42.305 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.305 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:42.305 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:42.305 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:41:42.305 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.305 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:41:42.305 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:42.306 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:41:42.306 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.306 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:41:42.306 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:42.306 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:41:42.306 [2024-10-07 14:52:05.991497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:42.306 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.306 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:42.306 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:42.306 14:52:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:41:42.306 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:42.306 14:52:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:41:42.566 [2024-10-07 14:52:06.180248] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:41:45.110 Initializing NVMe Controllers 00:41:45.110 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:41:45.110 controller IO queue size 128 less than required 00:41:45.110 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:41:45.110 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:41:45.110 Initialization complete. Launching workers. 
00:41:45.110 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27284 00:41:45.110 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27341, failed to submit 66 00:41:45.110 success 27284, unsuccessful 57, failed 0 00:41:45.110 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:45.110 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:45.110 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:41:45.110 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:45.110 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:41:45.110 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:41:45.110 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:45.110 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:41:45.110 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:45.110 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:41:45.110 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:45.110 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:45.110 rmmod nvme_tcp 00:41:45.110 rmmod nvme_fabrics 00:41:45.110 rmmod nvme_keyring 00:41:45.110 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:45.110 14:52:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:41:45.110 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:41:45.110 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 3314422 ']' 00:41:45.111 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 3314422 00:41:45.111 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3314422 ']' 00:41:45.111 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3314422 00:41:45.111 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:41:45.111 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:45.111 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3314422 00:41:45.111 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:41:45.111 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:41:45.111 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3314422' 00:41:45.111 killing process with pid 3314422 00:41:45.111 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3314422 00:41:45.111 14:52:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3314422 00:41:46.081 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:41:46.081 14:52:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:46.081 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:46.081 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:41:46.081 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:41:46.081 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:46.081 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:41:46.081 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:46.081 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:46.082 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:46.082 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:46.082 14:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:48.047 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:48.047 00:41:48.047 real 0m14.200s 00:41:48.047 user 0m12.534s 00:41:48.047 sys 0m6.875s 00:41:48.047 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:48.047 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:41:48.047 ************************************ 00:41:48.047 END TEST nvmf_abort 00:41:48.047 ************************************ 00:41:48.047 14:52:11 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:41:48.047 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:41:48.047 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:48.047 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:48.047 ************************************ 00:41:48.047 START TEST nvmf_ns_hotplug_stress 00:41:48.047 ************************************ 00:41:48.047 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:41:48.047 * Looking for test storage... 
00:41:48.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:48.047 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:41:48.047 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:41:48.047 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:41:48.309 14:52:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:41:48.309 14:52:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:41:48.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:48.309 --rc genhtml_branch_coverage=1 00:41:48.309 --rc genhtml_function_coverage=1 00:41:48.309 --rc genhtml_legend=1 00:41:48.309 --rc geninfo_all_blocks=1 00:41:48.309 --rc geninfo_unexecuted_blocks=1 00:41:48.309 00:41:48.309 ' 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:41:48.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:48.309 --rc genhtml_branch_coverage=1 00:41:48.309 --rc genhtml_function_coverage=1 00:41:48.309 --rc genhtml_legend=1 00:41:48.309 --rc geninfo_all_blocks=1 00:41:48.309 --rc geninfo_unexecuted_blocks=1 00:41:48.309 00:41:48.309 ' 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:41:48.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:48.309 --rc genhtml_branch_coverage=1 00:41:48.309 --rc genhtml_function_coverage=1 00:41:48.309 --rc genhtml_legend=1 00:41:48.309 --rc geninfo_all_blocks=1 00:41:48.309 --rc geninfo_unexecuted_blocks=1 00:41:48.309 00:41:48.309 ' 00:41:48.309 14:52:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:41:48.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:48.309 --rc genhtml_branch_coverage=1 00:41:48.309 --rc genhtml_function_coverage=1 00:41:48.309 --rc genhtml_legend=1 00:41:48.309 --rc geninfo_all_blocks=1 00:41:48.309 --rc geninfo_unexecuted_blocks=1 00:41:48.309 00:41:48.309 ' 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:48.309 14:52:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:48.309 
14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:48.309 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:48.310 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:48.310 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:48.310 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:48.310 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:48.310 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:48.310 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:41:48.310 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:48.310 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:48.310 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:48.310 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:48.310 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:48.310 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:48.310 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:48.310 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:48.310 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:48.310 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:41:48.310 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:41:48.310 14:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:41:56.446 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:56.446 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:41:56.446 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:56.446 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:56.446 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:56.446 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:56.446 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:56.446 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:41:56.446 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:56.446 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:41:56.446 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:41:56.446 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:41:56.446 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:41:56.446 
14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:41:56.446 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:41:56.446 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:56.446 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:56.446 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:56.446 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:56.446 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:56.446 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:56.446 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:56.447 14:52:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:56.447 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:56.447 14:52:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:56.447 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:56.447 
14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:41:56.447 Found net devices under 0000:31:00.0: cvl_0_0 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:56.447 Found net devices under 0000:31:00.1: cvl_0_1 00:41:56.447 
14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:56.447 14:52:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:56.447 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:56.447 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:56.447 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:56.447 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:56.447 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:56.447 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:56.447 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:56.447 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:56.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:56.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:41:56.447 00:41:56.447 --- 10.0.0.2 ping statistics --- 00:41:56.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:56.447 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:41:56.447 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:56.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:56.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:41:56.447 00:41:56.447 --- 10.0.0.1 ping statistics --- 00:41:56.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:56.447 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:41:56.448 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:56.448 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:41:56.448 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:41:56.448 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:56.448 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:41:56.448 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:41:56.448 14:52:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:56.448 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:41:56.448 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:41:56.448 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:41:56.448 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:41:56.448 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:56.448 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:41:56.448 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=3319505 00:41:56.448 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 3319505 00:41:56.448 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:41:56.448 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3319505 ']' 00:41:56.448 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:56.448 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:56.448 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:56.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:56.448 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:56.448 14:52:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:41:56.448 [2024-10-07 14:52:19.391267] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:56.448 [2024-10-07 14:52:19.393893] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:41:56.448 [2024-10-07 14:52:19.393991] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:56.448 [2024-10-07 14:52:19.549494] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:56.448 [2024-10-07 14:52:19.779771] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:56.448 [2024-10-07 14:52:19.779842] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:56.448 [2024-10-07 14:52:19.779857] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:56.448 [2024-10-07 14:52:19.779868] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:56.448 [2024-10-07 14:52:19.779881] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
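The network-namespace plumbing performed earlier in this log (nvmf/common.sh: `ip netns add`, address assignment, link bring-up, and the cross-namespace ping checks) can be condensed into a dry-run sketch. The interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.x addresses are taken from the log; the `run` wrapper only echoes each command, so the sketch is safe to execute without root or real NICs:

```shell
# Dry-run sketch of the netns setup sequence from nvmf/common.sh as seen in the log.
# run() echoes instead of executing, so no privileges or hardware are required.
run() { echo "$*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                                  # create the target namespace
run ip link set cvl_0_0 netns "$NS"                     # move one port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1                 # host side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # namespace side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                                  # verify host -> namespace
run ip netns exec "$NS" ping -c 1 10.0.0.1              # verify namespace -> host
```

With the real commands substituted for `run`, this is the state the log's two successful pings (0.556 ms and 0.285 ms RTT) confirm before the target is started inside the namespace.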
00:41:56.448 [2024-10-07 14:52:19.782025] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:41:56.448 [2024-10-07 14:52:19.782162] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:41:56.448 [2024-10-07 14:52:19.782350] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:41:56.448 [2024-10-07 14:52:20.079836] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:56.448 [2024-10-07 14:52:20.081096] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:56.448 [2024-10-07 14:52:20.081283] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:56.448 [2024-10-07 14:52:20.081443] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:56.708 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:56.708 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:41:56.708 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:41:56.708 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:41:56.708 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:41:56.708 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:56.708 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:41:56.708 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:56.708 [2024-10-07 14:52:20.375721] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:56.709 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:41:56.969 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:57.230 [2024-10-07 14:52:20.756530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:57.230 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:57.491 14:52:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:41:57.491 Malloc0 00:41:57.491 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:41:57.752 Delay0 00:41:57.752 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:58.012 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:41:58.012 NULL1 00:41:58.274 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:41:58.274 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3319881 00:41:58.274 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:41:58.274 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:41:58.274 14:52:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:58.534 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:58.796 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:41:58.796 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:41:58.796 true 00:41:58.796 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:41:58.796 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:59.058 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:59.318 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:41:59.318 14:52:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:41:59.318 true 00:41:59.579 14:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:41:59.579 14:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:41:59.579 14:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:41:59.840 14:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:41:59.840 14:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:42:00.100 true 00:42:00.100 14:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:00.100 14:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:00.100 14:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:00.361 14:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:42:00.361 14:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:42:00.621 true 00:42:00.621 14:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:00.621 14:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:00.881 14:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:00.881 14:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:42:00.881 14:52:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:42:01.140 true 00:42:01.140 14:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:01.140 14:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:01.399 14:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:01.659 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:42:01.659 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:42:01.659 true 00:42:01.659 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:01.659 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:01.919 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:02.180 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
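The target-side configuration issued above via rpc.py (TCP transport, subsystem `cnode1`, listeners, and the Malloc0 -> Delay0 and NULL1 bdevs) can be condensed into a dry-run sketch. All RPC names and parameters are copied from the log; the `rpc` wrapper only echoes, so no running `nvmf_tgt` is needed:

```shell
# Dry-run sketch of the target setup shown in this log (ns_hotplug_stress.sh 27-36).
# rpc() echoes each RPC instead of invoking scripts/rpc.py against a live target.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 512 -b Malloc0                 # backing bdev
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns "$NQN" Delay0                  # slow namespace
rpc bdev_null_create NULL1 1000 512                      # resizable null bdev
rpc nvmf_subsystem_add_ns "$NQN" NULL1                   # namespace to be resized
```

The Delay0 bdev (1 s latency on every op) is what keeps I/O in flight long enough for the subsequent hotplug cycles to race against it.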
00:42:02.180 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:42:02.180 true 00:42:02.180 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:02.180 14:52:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:02.441 14:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:02.702 14:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:42:02.702 14:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:42:02.702 true 00:42:02.962 14:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:02.962 14:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:02.962 14:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:03.223 14:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:42:03.223 14:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:42:03.483 true 00:42:03.483 14:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:03.483 14:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:03.483 14:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:03.742 14:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:42:03.742 14:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:42:04.002 true 00:42:04.002 14:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:04.002 14:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:04.263 14:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:04.263 14:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:42:04.263 14:52:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:42:04.524 true 00:42:04.524 14:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:04.524 14:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:04.787 14:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:04.787 14:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:42:04.787 14:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:42:05.048 true 00:42:05.048 14:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:05.048 14:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:05.309 14:52:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:05.570 14:52:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:42:05.570 14:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:42:05.570 true 00:42:05.570 14:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:05.570 14:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:05.831 14:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:06.093 14:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:42:06.093 14:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:42:06.093 true 00:42:06.093 14:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:06.093 14:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:06.355 14:52:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:42:06.617 14:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:42:06.617 14:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:42:06.617 true 00:42:06.878 14:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:06.878 14:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:06.878 14:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:07.138 14:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:42:07.138 14:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:42:07.398 true 00:42:07.398 14:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:07.398 14:52:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:07.398 14:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:42:07.659 14:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:42:07.659 14:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:42:07.920 true 00:42:07.920 14:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:07.920 14:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:07.920 14:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:08.181 14:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:42:08.181 14:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:42:08.440 true 00:42:08.440 14:52:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:08.440 14:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:08.700 14:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:08.700 14:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:42:08.700 14:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:42:08.960 true 00:42:08.960 14:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:08.960 14:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:09.220 14:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:09.480 14:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:42:09.480 14:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:42:09.480 true 00:42:09.480 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:09.480 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:09.740 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:10.001 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:42:10.001 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:42:10.001 true 00:42:10.001 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:10.001 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:10.261 14:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:10.522 14:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:42:10.522 14:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:42:10.522 true 00:42:10.782 14:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:10.782 14:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:10.782 14:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:11.043 14:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:42:11.043 14:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:42:11.304 true 00:42:11.304 14:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:11.304 14:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:11.304 14:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:11.565 14:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:42:11.565 14:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:42:11.825 true 00:42:11.825 14:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:11.825 14:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:12.087 14:52:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:12.087 14:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:42:12.087 14:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:42:12.348 true 00:42:12.348 14:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:12.348 14:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:12.609 14:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:12.609 14:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:42:12.609 14:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:42:12.870 true 00:42:12.870 14:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:12.870 14:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:42:13.131 14:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:13.392 14:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:42:13.392 14:52:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:42:13.392 true 00:42:13.392 14:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:13.392 14:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:13.653 14:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:13.914 14:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:42:13.914 14:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:42:14.174 true 00:42:14.174 14:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:14.174 14:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:42:14.174 14:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:14.435 14:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:42:14.435 14:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:42:14.701 true 00:42:14.701 14:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:14.701 14:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:14.701 14:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:15.002 14:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:42:15.002 14:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:42:15.282 true 00:42:15.282 14:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:15.282 14:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:15.282 14:52:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:15.542 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:42:15.542 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:42:15.802 true 00:42:15.802 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:15.802 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:15.802 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:16.062 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:42:16.062 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:42:16.321 true 00:42:16.321 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:16.321 14:52:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:16.581 14:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:16.581 14:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:42:16.581 14:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:42:16.840 true 00:42:16.840 14:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:16.840 14:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:17.100 14:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:17.359 14:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:42:17.359 14:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:42:17.359 true 00:42:17.359 14:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:17.359 14:52:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:17.618 14:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:17.878 14:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:42:17.878 14:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:42:17.878 true 00:42:17.878 14:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:17.878 14:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:18.137 14:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:18.397 14:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:42:18.397 14:52:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:42:18.397 true 00:42:18.397 14:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:18.397 14:52:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:18.657 14:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:18.917 14:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:42:18.917 14:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:42:18.917 true 00:42:19.177 14:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:19.177 14:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:19.177 14:52:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:19.438 14:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:42:19.438 14:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:42:19.699 true 00:42:19.699 14:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 
00:42:19.699 14:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:19.699 14:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:19.959 14:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:42:19.959 14:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:42:20.219 true 00:42:20.219 14:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:20.219 14:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:20.480 14:52:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:20.480 14:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:42:20.480 14:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:42:20.740 true 00:42:20.740 14:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 3319881 00:42:20.740 14:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:20.999 14:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:20.999 14:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:42:20.999 14:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:42:21.259 true 00:42:21.259 14:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:21.259 14:52:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:21.519 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:21.779 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:42:21.779 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:42:21.779 true 00:42:21.779 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:21.779 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:22.040 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:22.301 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:42:22.301 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:42:22.301 true 00:42:22.301 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:22.301 14:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:22.561 14:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:22.820 14:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:42:22.820 14:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:42:22.820 true 00:42:23.080 14:52:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:23.080 14:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:23.080 14:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:23.340 14:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:42:23.340 14:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:42:23.600 true 00:42:23.600 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:23.600 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:23.600 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:23.860 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:42:23.860 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:42:24.119 true 
00:42:24.119 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:24.119 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:24.378 14:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:24.378 14:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:42:24.378 14:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:42:24.638 true 00:42:24.638 14:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:24.638 14:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:24.898 14:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:24.898 14:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:42:24.898 14:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 
00:42:25.157 true 00:42:25.157 14:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:25.157 14:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:25.417 14:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:25.677 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:42:25.677 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:42:25.677 true 00:42:25.677 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:25.677 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:25.938 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:26.198 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:42:26.198 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1050 00:42:26.198 true 00:42:26.198 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:26.198 14:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:26.476 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:26.736 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:42:26.736 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:42:26.736 true 00:42:26.996 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:26.996 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:26.996 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:27.257 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:42:27.257 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:42:27.257 true 00:42:27.518 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:27.518 14:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:27.518 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:27.778 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:42:27.778 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:42:28.040 true 00:42:28.040 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881 00:42:28.040 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:28.040 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:28.300 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:42:28.300 14:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:42:28.561 true
00:42:28.561 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881
00:42:28.561 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:42:28.561 Initializing NVMe Controllers
00:42:28.561 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:42:28.561 Controller IO queue size 128, less than required.
00:42:28.561 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:42:28.561 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:42:28.561 Initialization complete. Launching workers.
00:42:28.561 ========================================================
00:42:28.561                                                                                                      Latency(us)
00:42:28.561 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:42:28.561 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   27423.20      13.39    4667.38    1628.66   11955.75
00:42:28.561 ========================================================
00:42:28.561 Total                                                                    :   27423.20      13.39    4667.38    1628.66   11955.75
00:42:28.821 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:42:28.821 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:42:28.821 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:42:29.082 true
00:42:29.082 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3319881
00:42:29.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3319881) - No such process
00:42:29.082 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3319881
00:42:29.082 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:42:29.343 14:52:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:42:29.343 14:52:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:42:29.343 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:42:29.343 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:42:29.343 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:29.343 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:42:29.604 null0 00:42:29.604 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:42:29.604 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:29.604 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:42:29.864 null1 00:42:29.864 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:42:29.865 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:29.865 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:42:29.865 null2 00:42:29.865 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:42:29.865 14:52:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:29.865 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:42:30.125 null3 00:42:30.125 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:42:30.125 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:30.125 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:42:30.125 null4 00:42:30.125 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:42:30.125 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:30.125 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:42:30.385 null5 00:42:30.385 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:42:30.385 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:30.385 14:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:42:30.648 null6 00:42:30.648 14:52:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:42:30.648 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:30.648 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:42:30.648 null7 00:42:30.648 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:42:30.648 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:42:30.648 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:42:30.648 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:30.648 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:42:30.648 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:42:30.648 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:42:30.648 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:30.648 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:42:30.648 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:42:30.648 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:30.648 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:30.648 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:42:30.648 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:42:30.648 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:30.648 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:42:30.648 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:42:30.649 14:52:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3326061 3326062 3326064 3326067 3326069 3326071 3326073 3326075 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:30.649 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:30.911 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:30.911 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:30.911 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:42:30.911 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:30.911 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:30.911 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:30.911 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:30.911 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:31.172 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:31.172 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:31.172 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:31.172 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:31.172 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:31.172 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:31.172 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:31.172 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:31.172 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:31.172 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:31.172 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:31.172 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:31.172 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:31.172 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:31.173 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:31.173 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:42:31.173 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:31.173 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:31.173 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:31.173 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:31.173 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:31.173 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:31.173 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:31.173 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:31.432 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:31.433 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:31.433 14:52:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:31.433 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:31.433 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:31.433 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:31.433 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:31.433 14:52:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:31.433 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:31.433 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:31.433 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:31.433 14:52:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:31.433 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:31.433 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:31.433 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:31.433 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:31.433 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:31.433 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:31.433 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:31.433 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:31.433 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:31.433 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:31.433 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:31.693 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:31.693 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:31.693 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:31.693 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:31.693 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:31.693 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:31.693 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:31.693 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:31.693 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:31.693 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:31.693 14:52:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:31.693 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:31.693 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:31.693 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:31.693 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:31.693 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:31.693 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:31.953 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:31.953 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:31.953 14:52:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:31.953 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:31.953 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:31.953 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:31.954 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:31.954 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:31.954 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:31.954 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:31.954 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:31.954 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:31.954 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:31.954 14:52:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:31.954 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:31.954 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:31.954 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:31.954 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:31.954 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:31.954 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:31.954 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:31.954 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:31.954 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:31.954 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:31.954 14:52:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:31.954 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:31.954 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:32.215 14:52:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:32.215 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:32.215 14:52:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:32.476 14:52:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:32.476 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:32.476 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:32.476 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:32.476 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:32.476 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:32.476 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:32.476 14:52:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:32.476 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:32.476 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:32.476 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:32.476 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:32.476 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:32.476 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:32.737 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:32.737 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:32.737 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:32.737 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:32.737 14:52:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:32.737 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:32.737 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:32.737 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:32.737 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:32.737 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:32.737 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:32.737 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:32.737 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:32.737 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:32.737 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:32.737 14:52:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:32.737 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:32.737 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:32.737 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:32.737 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:32.737 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:32.737 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:32.737 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:32.737 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:32.738 14:52:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
5 nqn.2016-06.io.spdk:cnode1 null4 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:32.997 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:33.257 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:33.257 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:33.257 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:33.257 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:33.257 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:33.257 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:33.257 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:33.257 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:33.257 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:33.257 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:33.257 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:33.257 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:33.257 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:33.257 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:33.517 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:33.517 14:52:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:33.517 14:52:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:33.517 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:33.517 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:33.517 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:33.517 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:33.517 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:33.517 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:33.517 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:33.517 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:33.517 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:33.517 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:33.517 14:52:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:33.517 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:33.517 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:33.517 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:33.517 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:33.517 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:33.517 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:33.517 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:33.517 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:33.517 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:33.517 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 
null2 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:33.778 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:34.040 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:34.040 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:42:34.040 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:34.040 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:34.040 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:34.040 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:34.040 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:34.040 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:42:34.040 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:34.040 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:34.040 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:42:34.040 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:34.040 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:34.040 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:34.040 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:42:34.040 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:34.040 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:34.040 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:42:34.300 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:34.300 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:34.300 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:42:34.300 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:34.300 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:34.300 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:42:34.300 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:34.300 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:34.300 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:42:34.300 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:42:34.300 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:42:34.300 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:34.300 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:34.300 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:42:34.300 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:42:34.300 14:52:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:42:34.300 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:42:34.300 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:42:34.300 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:34.300 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:34.300 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:34.300 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:34.300 14:52:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:34.561 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:42:34.561 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:34.561 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:42:34.561 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:34.561 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:34.561 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:34.561 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:34.561 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:34.561 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:34.561 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:34.561 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:34.561 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:42:34.561 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:42:34.561 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:42:34.561 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:42:34.561 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:34.561 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:42:34.561 14:52:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:34.561 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:42:34.561 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:34.561 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:34.561 rmmod nvme_tcp 00:42:34.561 rmmod nvme_fabrics 00:42:34.822 rmmod nvme_keyring 00:42:34.822 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:34.822 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:42:34.822 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:42:34.822 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 3319505 ']' 00:42:34.822 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 3319505 00:42:34.822 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3319505 ']' 00:42:34.822 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3319505 00:42:34.822 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:42:34.822 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:34.822 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3319505 00:42:34.822 14:52:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:34.822 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:34.822 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3319505' 00:42:34.822 killing process with pid 3319505 00:42:34.822 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3319505 00:42:34.822 14:52:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3319505 00:42:35.393 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:42:35.393 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:35.393 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:35.393 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:42:35.393 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:42:35.393 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:35.393 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:42:35.393 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:35.393 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:35.393 14:52:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:35.393 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:35.393 14:52:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:37.937 00:42:37.937 real 0m49.525s 00:42:37.937 user 3m3.516s 00:42:37.937 sys 0m22.364s 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:42:37.937 ************************************ 00:42:37.937 END TEST nvmf_ns_hotplug_stress 00:42:37.937 ************************************ 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:37.937 ************************************ 00:42:37.937 START TEST nvmf_delete_subsystem 00:42:37.937 ************************************ 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:42:37.937 * Looking for test storage... 00:42:37.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:42:37.937 
14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:42:37.937 14:53:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:37.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:37.937 --rc genhtml_branch_coverage=1 00:42:37.937 --rc genhtml_function_coverage=1 00:42:37.937 --rc genhtml_legend=1 00:42:37.937 --rc geninfo_all_blocks=1 00:42:37.937 --rc geninfo_unexecuted_blocks=1 00:42:37.937 00:42:37.937 ' 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:37.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:37.937 --rc genhtml_branch_coverage=1 00:42:37.937 --rc genhtml_function_coverage=1 00:42:37.937 --rc genhtml_legend=1 00:42:37.937 --rc geninfo_all_blocks=1 00:42:37.937 --rc geninfo_unexecuted_blocks=1 00:42:37.937 00:42:37.937 ' 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:37.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:37.937 --rc genhtml_branch_coverage=1 00:42:37.937 --rc genhtml_function_coverage=1 00:42:37.937 --rc genhtml_legend=1 00:42:37.937 --rc geninfo_all_blocks=1 00:42:37.937 --rc 
geninfo_unexecuted_blocks=1 00:42:37.937 00:42:37.937 ' 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:37.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:37.937 --rc genhtml_branch_coverage=1 00:42:37.937 --rc genhtml_function_coverage=1 00:42:37.937 --rc genhtml_legend=1 00:42:37.937 --rc geninfo_all_blocks=1 00:42:37.937 --rc geninfo_unexecuted_blocks=1 00:42:37.937 00:42:37.937 ' 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:37.937 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:37.938 
14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:42:37.938 14:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:42:37.938 14:53:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:42:46.078 Found 0000:31:00.0 (0x8086 - 0x159b) 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:31:00.1 (0x8086 - 0x159b)' 00:42:46.078 Found 0000:31:00.1 (0x8086 - 0x159b) 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:46.078 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:46.079 14:53:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:42:46.079 Found net devices under 0000:31:00.0: cvl_0_0 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:42:46.079 Found net devices under 0000:31:00.1: cvl_0_1 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:42:46.079 14:53:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
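The namespace wiring issued above (`ip netns add`, moving `cvl_0_0` into it, addressing both ends, opening port 4420) can be collected into one sequence. This is a dry-run sketch, not the real `nvmf_tcp_init`: the names (`cvl_0_0`, `cvl_0_1`, the `10.0.0.x/24` addresses, `nvmf_tcp_init_sketch` itself) are copied from or invented for this log, and `ECHO=echo` keeps it runnable without root — drop it to actually apply the commands.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target/initiator split the log performs above.
ECHO=echo                 # set ECHO= (empty) to really run these, as root
NS=cvl_0_0_ns_spdk        # namespace holding the target-side port
TGT=cvl_0_0               # target NIC port, moved into the namespace
INI=cvl_0_1               # initiator NIC port, stays in the default namespace

nvmf_tcp_init_sketch() {
    $ECHO ip netns add "$NS"
    $ECHO ip link set "$TGT" netns "$NS"
    # Initiator gets 10.0.0.1, target (inside the namespace) gets 10.0.0.2.
    $ECHO ip addr add 10.0.0.1/24 dev "$INI"
    $ECHO ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
    $ECHO ip link set "$INI" up
    $ECHO ip netns exec "$NS" ip link set "$TGT" up
    $ECHO ip netns exec "$NS" ip link set lo up
    # Open the NVMe/TCP port on the initiator side, as the ipts helper does.
    $ECHO iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT
}

nvmf_tcp_init_sketch
```

The bidirectional `ping -c 1` pair that follows in the log is the sanity check that both directions of this wiring work before the target is started.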
00:42:46.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:46.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:42:46.079 00:42:46.079 --- 10.0.0.2 ping statistics --- 00:42:46.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:46.079 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:46.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:46.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:42:46.079 00:42:46.079 --- 10.0.0.1 ping statistics --- 00:42:46.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:46.079 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@500 -- # modprobe nvme-tcp 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=3331309 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 3331309 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3331309 ']' 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:46.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
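The `waitforlisten 3331309` call above blocks until the just-launched `nvmf_tgt` is ready on `/var/tmp/spdk.sock`. A minimal sketch of that polling pattern (the real helper in `autotest_common.sh` also probes the socket with an RPC; `waitforlisten_sketch` and its timing constants are assumptions for illustration):

```shell
#!/usr/bin/env bash
# Poll until a daemon's UNIX socket appears, or give up after N tries.
# Mirrors the "Waiting for process to start up and listen on UNIX domain
# socket ..." step in the log, in simplified form.
waitforlisten_sketch() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [[ -e $sock ]] && return 0   # real helper also rpc-probes the socket
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

Polling a path (rather than just `sleep`ing a fixed time) is what lets the test proceed the moment the target is up while still failing fast if it never starts.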
00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:46.079 14:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:46.079 [2024-10-07 14:53:08.976895] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:46.079 [2024-10-07 14:53:08.979392] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:42:46.079 [2024-10-07 14:53:08.979478] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:46.079 [2024-10-07 14:53:09.118395] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:46.079 [2024-10-07 14:53:09.300979] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:46.079 [2024-10-07 14:53:09.301034] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:46.079 [2024-10-07 14:53:09.301047] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:46.079 [2024-10-07 14:53:09.301057] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:46.079 [2024-10-07 14:53:09.301068] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:46.079 [2024-10-07 14:53:09.302550] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:42:46.079 [2024-10-07 14:53:09.302575] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:42:46.079 [2024-10-07 14:53:09.549377] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:42:46.079 [2024-10-07 14:53:09.549576] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:46.079 [2024-10-07 14:53:09.549687] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:46.079 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:46.079 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:42:46.079 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:42:46.079 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:46.079 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:46.080 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:46.340 [2024-10-07 14:53:09.791295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:46.340 [2024-10-07 14:53:09.823918] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:46.340 NULL1 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:46.340 Delay0 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3331636 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:42:46.340 14:53:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:42:46.340 [2024-10-07 14:53:09.961394] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
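The `-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'` argument handed to `spdk_nvme_perf` above is a transport-ID string: space-separated `key:value` fields naming the transport, address family, target address, and service (port). A sketch of splitting one apart (`parse_trid` is a hypothetical helper; SPDK's own parsing happens inside the tool):

```shell
#!/usr/bin/env bash
# Split an SPDK transport-ID string into key=value lines.
# Relies on word splitting of the unquoted argument, one field per word.
parse_trid() {
    local field
    for field in $1; do
        printf '%s=%s\n' "${field%%:*}" "${field#*:}"
    done
}

parse_trid 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
```

Here `trsvcid:4420` is the NVMe/TCP port the firewall rule and listener earlier in the log were set up for, and `traddr:10.0.0.2` is the target address inside the namespace.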
00:42:48.251 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:48.251 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:48.251 14:53:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:48.512 Read completed with error (sct=0, sc=8) 00:42:48.512 Read completed with error (sct=0, sc=8) 00:42:48.512 Write completed with error (sct=0, sc=8) 00:42:48.512 starting I/O failed: -6 00:42:48.512 Read completed with error (sct=0, sc=8) 00:42:48.512 Write completed with error (sct=0, sc=8) 00:42:48.512 Write completed with error (sct=0, sc=8) 00:42:48.512 Write completed with error (sct=0, sc=8) 00:42:48.512 starting I/O failed: -6 00:42:48.512 Write completed with error (sct=0, sc=8) 00:42:48.512 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 starting I/O failed: -6 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 starting I/O failed: -6 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 starting I/O failed: -6 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 starting I/O failed: -6 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, 
sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 starting I/O failed: -6 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 starting I/O failed: -6 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 starting I/O failed: -6 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 starting I/O failed: -6 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 starting I/O failed: -6 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 [2024-10-07 14:53:12.005159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000027180 is same with the state(6) to be set 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with 
error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 
00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 [2024-10-07 14:53:12.005689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026780 is same with the state(6) to be set 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 starting I/O failed: -6 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 starting I/O failed: -6 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 starting I/O failed: -6 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 starting I/O failed: -6 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 starting I/O failed: -6 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 starting I/O failed: -6 
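The `(sct=0, sc=8)` pairs flooding these completions decode, per the NVMe base specification's status tables, to status-code-type 0 (generic command status) and generic status 0x08, "Command Aborted due to SQ Deletion" — which is the expected fate of in-flight perf I/O when `nvmf_delete_subsystem` tears down the submission queues mid-run. A small sketch of that decode (`decode_status` is a hypothetical helper, covering only the codes seen in this log):

```shell
#!/usr/bin/env bash
# Decode the (sct, sc) pairs from the completion log lines above.
# Only the cases visible in this log are handled; everything else is
# deferred to the NVMe base spec's status-code tables.
decode_status() {
    local sct=$1 sc=$2
    if (( sct == 0 && sc == 0 )); then
        echo "Success"
    elif (( sct == 0 && sc == 8 )); then
        echo "Command Aborted due to SQ Deletion"
    else
        echo "sct=$sct sc=$sc (see NVMe base spec status tables)"
    fi
}
```

The interleaved `starting I/O failed: -6` lines are the submit-side counterpart: new I/O being rejected (with a negative errno) once the queues are going away.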
00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 starting I/O failed: -6 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 starting I/O failed: -6 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 starting I/O failed: -6 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 starting I/O failed: -6 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 starting I/O failed: -6 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 [2024-10-07 14:53:12.007959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030a00 is same with the state(6) to be set 00:42:48.513 starting I/O failed: -6 00:42:48.513 starting I/O failed: -6 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Write completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read completed with error (sct=0, sc=8) 00:42:48.513 Read 
completed with error (sct=0, sc=8) 
00:42:48.513 Write completed with error (sct=0, sc=8) 
00:42:48.513 Read completed with error (sct=0, sc=8) 
00:42:49.456 [2024-10-07 14:53:12.988949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025d80 is same with the state(6) to be set 
00:42:49.456 [2024-10-07 14:53:13.010019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030500 is same with the state(6) to be set 
00:42:49.456 [2024-10-07 14:53:13.010843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000027680 is same with the state(6) to be set 
00:42:49.456 [2024-10-07 14:53:13.011291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000026c80 is same with the state(6) to be set 
00:42:49.456 [2024-10-07 14:53:13.013749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030f00 is same with the state(6) to be set 
00:42:49.456 Initializing NVMe Controllers 
00:42:49.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:42:49.456 Controller IO queue size 128, less than required. 
00:42:49.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:42:49.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 
00:42:49.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 
00:42:49.457 Initialization complete. Launching workers. 
00:42:49.457 ======================================================== 
00:42:49.457 Latency(us) 
00:42:49.457 Device Information : IOPS MiB/s Average min max 
00:42:49.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.13 0.08 893245.82 531.94 1011107.05 
00:42:49.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 167.15 0.08 903435.01 408.35 1010217.53 
00:42:49.457 ======================================================== 
00:42:49.457 Total : 337.28 0.16 898295.33 408.35 1011107.05 
00:42:49.457 
00:42:49.457 [2024-10-07 14:53:13.014901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000025d80 (9): Bad file descriptor 
00:42:49.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 
00:42:49.457 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:42:49.457 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 
00:42:49.457 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3331636 
00:42:49.457 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 
00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 
00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3331636 
00:42:50.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3331636) - No such process 
00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3331636 00:42:50.028 14:53:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3331636 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3331636 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:50.028 [2024-10-07 14:53:13.543900] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3332311 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3332311 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:42:50.028 14:53:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:42:50.028 [2024-10-07 14:53:13.645945] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:42:50.599 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:42:50.599 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3332311 00:42:50.599 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:42:51.169 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:42:51.169 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3332311 00:42:51.169 14:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:42:51.428 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:42:51.428 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3332311 00:42:51.428 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:42:51.998 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:42:51.998 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3332311 00:42:51.998 14:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:42:52.569 14:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:42:52.569 14:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3332311 00:42:52.569 14:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:42:53.141 14:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:42:53.141 14:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3332311 00:42:53.141 14:53:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:42:53.402 Initializing NVMe Controllers 00:42:53.402 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:53.402 Controller IO queue size 128, less than required. 00:42:53.402 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:42:53.402 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:42:53.402 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:42:53.402 Initialization complete. Launching workers. 
00:42:53.402 ======================================================== 
00:42:53.402 Latency(us) 
00:42:53.402 Device Information : IOPS MiB/s Average min max 
00:42:53.402 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002478.20 1000255.61 1006581.32 
00:42:53.402 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004345.65 1000755.83 1010041.53 
00:42:53.402 ======================================================== 
00:42:53.402 Total : 256.00 0.12 1003411.92 1000255.61 1010041.53 
00:42:53.402 
00:42:53.402 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 
00:42:53.402 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3332311 
00:42:53.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3332311) - No such process 
00:42:53.402 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3332311 
00:42:53.402 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 
00:42:53.402 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 
00:42:53.402 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 
00:42:53.402 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 
00:42:53.402 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:42:53.402 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:42:53.402 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:42:53.402 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:53.402 rmmod nvme_tcp 00:42:53.662 rmmod nvme_fabrics 00:42:53.662 rmmod nvme_keyring 00:42:53.663 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:53.663 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:42:53.663 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:42:53.663 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 3331309 ']' 00:42:53.663 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 3331309 00:42:53.663 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3331309 ']' 00:42:53.663 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3331309 00:42:53.663 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:42:53.663 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:53.663 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3331309 00:42:53.663 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:53.663 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:53.663 14:53:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3331309' 00:42:53.663 killing process with pid 3331309 00:42:53.663 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3331309 00:42:53.663 14:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3331309 00:42:54.604 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:42:54.604 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:54.604 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:54.604 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:42:54.604 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:42:54.604 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:54.604 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:42:54.604 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:54.604 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:54.604 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:54.604 14:53:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:54.604 14:53:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:56.516 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:56.516 00:42:56.516 real 0m18.967s 00:42:56.516 user 0m27.322s 00:42:56.516 sys 0m7.651s 00:42:56.516 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:56.516 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:42:56.516 ************************************ 00:42:56.516 END TEST nvmf_delete_subsystem 00:42:56.516 ************************************ 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:56.777 ************************************ 00:42:56.777 START TEST nvmf_host_management 00:42:56.777 ************************************ 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:42:56.777 * Looking for test storage... 
00:42:56.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:42:56.777 14:53:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:42:56.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:56.777 --rc genhtml_branch_coverage=1 00:42:56.777 --rc genhtml_function_coverage=1 00:42:56.777 --rc genhtml_legend=1 00:42:56.777 --rc geninfo_all_blocks=1 00:42:56.777 --rc geninfo_unexecuted_blocks=1 00:42:56.777 00:42:56.777 ' 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:42:56.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:56.777 --rc genhtml_branch_coverage=1 00:42:56.777 --rc genhtml_function_coverage=1 00:42:56.777 --rc genhtml_legend=1 00:42:56.777 --rc geninfo_all_blocks=1 00:42:56.777 --rc geninfo_unexecuted_blocks=1 00:42:56.777 00:42:56.777 ' 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:42:56.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:56.777 --rc genhtml_branch_coverage=1 00:42:56.777 --rc genhtml_function_coverage=1 00:42:56.777 --rc genhtml_legend=1 00:42:56.777 --rc geninfo_all_blocks=1 00:42:56.777 --rc geninfo_unexecuted_blocks=1 00:42:56.777 00:42:56.777 ' 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:42:56.777 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:56.777 --rc genhtml_branch_coverage=1 00:42:56.777 --rc genhtml_function_coverage=1 00:42:56.777 --rc genhtml_legend=1 00:42:56.777 --rc geninfo_all_blocks=1 00:42:56.777 --rc geninfo_unexecuted_blocks=1 00:42:56.777 00:42:56.777 ' 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:56.777 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:56.778 14:53:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:56.778 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:42:56.778 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:56.778 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:56.778 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:56.778 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:56.778 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:56.778 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:42:56.778 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:56.778 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:56.778 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:56.778 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:56.778 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:56.778 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:56.778 
14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:42:56.778 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:56.778 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:42:57.039 14:53:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:43:05.178 
14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:05.178 14:53:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:43:05.178 Found 0000:31:00.0 (0x8086 - 0x159b) 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:05.178 14:53:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:43:05.178 Found 0000:31:00.1 (0x8086 - 0x159b) 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:05.178 14:53:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:43:05.178 Found net devices under 0000:31:00.0: cvl_0_0 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:43:05.178 Found net devices under 0000:31:00.1: cvl_0_1 00:43:05.178 14:53:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:05.178 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:05.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:05.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:43:05.179 00:43:05.179 --- 10.0.0.2 ping statistics --- 00:43:05.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:05.179 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:05.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:05.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:43:05.179 00:43:05.179 --- 10.0.0.1 ping statistics --- 00:43:05.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:05.179 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=3337386 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 3337386 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3337386 ']' 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:05.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:05.179 14:53:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:05.179 [2024-10-07 14:53:27.996727] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:05.179 [2024-10-07 14:53:27.999499] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:43:05.179 [2024-10-07 14:53:27.999601] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:05.179 [2024-10-07 14:53:28.159128] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:05.179 [2024-10-07 14:53:28.390724] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:05.179 [2024-10-07 14:53:28.390801] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:05.179 [2024-10-07 14:53:28.390817] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:05.179 [2024-10-07 14:53:28.390828] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:05.179 [2024-10-07 14:53:28.390840] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:43:05.179 [2024-10-07 14:53:28.393767] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:43:05.179 [2024-10-07 14:53:28.393920] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:43:05.179 [2024-10-07 14:53:28.394102] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:43:05.179 [2024-10-07 14:53:28.394106] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:43:05.179 [2024-10-07 14:53:28.680989] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:05.179 [2024-10-07 14:53:28.682366] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:05.179 [2024-10-07 14:53:28.682998] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:05.179 [2024-10-07 14:53:28.683185] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:43:05.179 [2024-10-07 14:53:28.683364] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:43:05.179 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:05.179 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:43:05.179 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:43:05.179 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:05.179 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:05.179 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:05.179 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:05.179 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:05.179 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:05.179 [2024-10-07 14:53:28.803622] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:05.179 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:05.179 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:43:05.179 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:05.179 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:05.179 14:53:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:43:05.179 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:43:05.179 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:43:05.179 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:05.179 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:05.441 Malloc0 00:43:05.441 [2024-10-07 14:53:28.935271] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:05.441 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:05.441 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:43:05.441 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:05.441 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:05.441 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3337563 00:43:05.441 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3337563 /var/tmp/bdevperf.sock 00:43:05.441 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3337563 ']' 00:43:05.441 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:43:05.441 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:05.441 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:43:05.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:43:05.441 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:43:05.441 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:43:05.441 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:05.441 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:05.441 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:43:05.441 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:43:05.441 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:43:05.441 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:43:05.441 { 00:43:05.441 "params": { 00:43:05.441 "name": "Nvme$subsystem", 00:43:05.441 "trtype": "$TEST_TRANSPORT", 00:43:05.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:05.441 "adrfam": "ipv4", 00:43:05.441 "trsvcid": "$NVMF_PORT", 00:43:05.441 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:43:05.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:05.441 "hdgst": ${hdgst:-false}, 00:43:05.441 "ddgst": ${ddgst:-false} 00:43:05.441 }, 00:43:05.441 "method": "bdev_nvme_attach_controller" 00:43:05.441 } 00:43:05.441 EOF 00:43:05.441 )") 00:43:05.441 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:43:05.441 14:53:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:43:05.441 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:43:05.441 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:43:05.441 "params": { 00:43:05.441 "name": "Nvme0", 00:43:05.441 "trtype": "tcp", 00:43:05.441 "traddr": "10.0.0.2", 00:43:05.441 "adrfam": "ipv4", 00:43:05.441 "trsvcid": "4420", 00:43:05.441 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:05.441 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:05.441 "hdgst": false, 00:43:05.441 "ddgst": false 00:43:05.441 }, 00:43:05.441 "method": "bdev_nvme_attach_controller" 00:43:05.441 }' 00:43:05.441 [2024-10-07 14:53:29.068125] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:43:05.441 [2024-10-07 14:53:29.068232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3337563 ] 00:43:05.702 [2024-10-07 14:53:29.183946] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:05.702 [2024-10-07 14:53:29.364471] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:43:06.274 Running I/O for 10 seconds... 
00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:43:06.274 14:53:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=142 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 142 -ge 100 ']' 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:06.274 
[2024-10-07 14:53:29.902966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:43:06.274 [2024-10-07 14:53:29.903027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:43:06.274 [2024-10-07 14:53:29.903039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:43:06.274 [2024-10-07 14:53:29.903050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:43:06.274 [2024-10-07 14:53:29.903059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:43:06.274 [2024-10-07 14:53:29.903075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:43:06.274 [2024-10-07 14:53:29.903085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:43:06.274 [2024-10-07 14:53:29.903094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:43:06.274 [2024-10-07 14:53:29.903103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:43:06.274 [2024-10-07 14:53:29.903113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:43:06.274 [2024-10-07 14:53:29.906433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:43:06.274 [2024-10-07 14:53:29.906479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:43:06.274 [2024-10-07 14:53:29.906496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:43:06.274 [2024-10-07 14:53:29.906507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.274 [2024-10-07 14:53:29.906519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:43:06.274 [2024-10-07 14:53:29.906530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.274 [2024-10-07 14:53:29.906541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:43:06.274 [2024-10-07 14:53:29.906551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.274 [2024-10-07 14:53:29.906562] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500039e200 is same with the state(6) to be set 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:43:06.274 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:06.274 [2024-10-07 14:53:29.916607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039e200 (9): Bad file 
descriptor 00:43:06.274 [2024-10-07 14:53:29.916709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.274 [2024-10-07 14:53:29.916726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.274 [2024-10-07 14:53:29.916749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.274 [2024-10-07 14:53:29.916761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.274 [2024-10-07 14:53:29.916774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.274 [2024-10-07 14:53:29.916785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.916798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.916814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.916827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.916838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.916850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 
14:53:29.916861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.916874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.916885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.916897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.916907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.916920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.916930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.916942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.916953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.916965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.916976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.916988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.916998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917388] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917525] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.275 [2024-10-07 14:53:29.917770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.275 [2024-10-07 14:53:29.917780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.276 
[2024-10-07 14:53:29.917793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.276 [2024-10-07 14:53:29.917803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.276 [2024-10-07 14:53:29.917815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.276 [2024-10-07 14:53:29.917826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.276 [2024-10-07 14:53:29.917838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.276 [2024-10-07 14:53:29.917849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.276 [2024-10-07 14:53:29.917861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.276 [2024-10-07 14:53:29.917872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.276 [2024-10-07 14:53:29.917884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.276 [2024-10-07 14:53:29.917894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.276 [2024-10-07 14:53:29.917907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.276 [2024-10-07 14:53:29.917918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.276 [2024-10-07 14:53:29.917931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.276 [2024-10-07 14:53:29.917941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.276 [2024-10-07 14:53:29.917953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.276 [2024-10-07 14:53:29.917964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.276 [2024-10-07 14:53:29.917976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.276 [2024-10-07 14:53:29.917988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.276 [2024-10-07 14:53:29.918004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.276 [2024-10-07 14:53:29.918014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.276 [2024-10-07 14:53:29.918027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.276 [2024-10-07 14:53:29.918037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.276 [2024-10-07 14:53:29.918049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.276 [2024-10-07 14:53:29.918060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.276 [2024-10-07 14:53:29.918072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.276 [2024-10-07 14:53:29.918082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.276 [2024-10-07 14:53:29.918095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.276 [2024-10-07 14:53:29.918105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.276 [2024-10-07 14:53:29.918118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.276 [2024-10-07 14:53:29.918128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.276 [2024-10-07 14:53:29.918140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.276 [2024-10-07 14:53:29.918151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.276 [2024-10-07 14:53:29.918164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.276 [2024-10-07 14:53:29.918174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:43:06.276 [2024-10-07 14:53:29.918186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.276 [2024-10-07 14:53:29.918197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.276 [2024-10-07 14:53:29.918209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:43:06.276 [2024-10-07 14:53:29.918219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:43:06.276 [2024-10-07 14:53:29.918449] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500039f100 was disconnected and freed. reset controller. 00:43:06.276 [2024-10-07 14:53:29.919701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:43:06.276 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:43:06.276 14:53:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:43:06.276 task offset: 34176 on job bdev=Nvme0n1 fails 00:43:06.276 00:43:06.276 Latency(us) 00:43:06.276 [2024-10-07T12:53:29.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:06.276 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:43:06.276 Job: Nvme0n1 ended in about 0.19 seconds with error 00:43:06.276 Verification LBA range: start 0x0 length 0x400 00:43:06.276 Nvme0n1 : 0.19 1342.00 83.88 335.50 0.00 35908.01 2307.41 39103.15 00:43:06.276 [2024-10-07T12:53:29.985Z] =================================================================================================================== 00:43:06.276 [2024-10-07T12:53:29.985Z] Total 
: 1342.00 83.88 335.50 0.00 35908.01 2307.41 39103.15 00:43:06.276 [2024-10-07 14:53:29.923950] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:06.276 [2024-10-07 14:53:29.976287] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:43:07.220 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3337563 00:43:07.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3337563) - No such process 00:43:07.220 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:43:07.220 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:43:07.481 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:43:07.481 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:43:07.481 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:43:07.481 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:43:07.481 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:43:07.481 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:43:07.481 { 00:43:07.481 "params": { 00:43:07.481 "name": "Nvme$subsystem", 00:43:07.481 "trtype": "$TEST_TRANSPORT", 00:43:07.481 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:43:07.481 "adrfam": "ipv4", 00:43:07.481 "trsvcid": "$NVMF_PORT", 00:43:07.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:07.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:07.481 "hdgst": ${hdgst:-false}, 00:43:07.481 "ddgst": ${ddgst:-false} 00:43:07.481 }, 00:43:07.481 "method": "bdev_nvme_attach_controller" 00:43:07.481 } 00:43:07.481 EOF 00:43:07.481 )") 00:43:07.481 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:43:07.481 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:43:07.481 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:43:07.481 14:53:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:43:07.481 "params": { 00:43:07.481 "name": "Nvme0", 00:43:07.481 "trtype": "tcp", 00:43:07.481 "traddr": "10.0.0.2", 00:43:07.481 "adrfam": "ipv4", 00:43:07.481 "trsvcid": "4420", 00:43:07.481 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:07.481 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:07.481 "hdgst": false, 00:43:07.481 "ddgst": false 00:43:07.481 }, 00:43:07.481 "method": "bdev_nvme_attach_controller" 00:43:07.481 }' 00:43:07.481 [2024-10-07 14:53:31.006503] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:43:07.481 [2024-10-07 14:53:31.006609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3337959 ] 00:43:07.481 [2024-10-07 14:53:31.122358] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:07.742 [2024-10-07 14:53:31.300530] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:43:08.002 Running I/O for 1 seconds... 
00:43:09.385 1470.00 IOPS, 91.88 MiB/s 00:43:09.385 Latency(us) 00:43:09.385 [2024-10-07T12:53:33.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:09.385 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:43:09.385 Verification LBA range: start 0x0 length 0x400 00:43:09.385 Nvme0n1 : 1.04 1471.14 91.95 0.00 0.00 42747.63 9229.65 36263.25 00:43:09.385 [2024-10-07T12:53:33.094Z] =================================================================================================================== 00:43:09.385 [2024-10-07T12:53:33.094Z] Total : 1471.14 91.95 0.00 0.00 42747.63 9229.65 36263.25 00:43:09.955 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:43:09.955 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:43:09.955 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:43:09.955 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:43:09.955 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:43:09.955 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:43:09.955 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:43:09.955 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:09.955 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:43:09.955 14:53:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:09.955 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:09.955 rmmod nvme_tcp 00:43:09.955 rmmod nvme_fabrics 00:43:09.955 rmmod nvme_keyring 00:43:09.955 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:09.955 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:43:09.955 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:43:09.955 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 3337386 ']' 00:43:09.955 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 3337386 00:43:09.955 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3337386 ']' 00:43:09.955 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3337386 00:43:09.955 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:43:09.955 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:09.956 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3337386 00:43:09.956 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:43:09.956 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:43:09.956 14:53:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3337386' 00:43:09.956 killing process with pid 3337386 00:43:09.956 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3337386 00:43:09.956 14:53:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3337386 00:43:10.898 [2024-10-07 14:53:34.243504] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:43:10.898 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:43:10.898 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:43:10.898 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:43:10.898 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:43:10.898 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:43:10.898 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:43:10.898 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:43:10.898 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:10.898 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:10.898 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:10.898 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:10.898 14:53:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:12.810 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:12.810 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:43:12.810 00:43:12.810 real 0m16.127s 00:43:12.810 user 0m25.069s 00:43:12.810 sys 0m7.963s 00:43:12.810 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:12.810 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:43:12.810 ************************************ 00:43:12.810 END TEST nvmf_host_management 00:43:12.810 ************************************ 00:43:12.810 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:43:12.810 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:43:12.810 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:12.810 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:12.810 ************************************ 00:43:12.810 START TEST nvmf_lvol 00:43:12.810 ************************************ 00:43:12.810 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:43:13.071 * Looking for test storage... 
00:43:13.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:13.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:13.071 --rc genhtml_branch_coverage=1 00:43:13.071 --rc genhtml_function_coverage=1 00:43:13.071 --rc genhtml_legend=1 00:43:13.071 --rc geninfo_all_blocks=1 00:43:13.071 --rc geninfo_unexecuted_blocks=1 00:43:13.071 00:43:13.071 ' 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:13.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:13.071 --rc genhtml_branch_coverage=1 00:43:13.071 --rc genhtml_function_coverage=1 00:43:13.071 --rc genhtml_legend=1 00:43:13.071 --rc geninfo_all_blocks=1 00:43:13.071 --rc geninfo_unexecuted_blocks=1 00:43:13.071 00:43:13.071 ' 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:13.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:13.071 --rc genhtml_branch_coverage=1 00:43:13.071 --rc genhtml_function_coverage=1 00:43:13.071 --rc genhtml_legend=1 00:43:13.071 --rc geninfo_all_blocks=1 00:43:13.071 --rc geninfo_unexecuted_blocks=1 00:43:13.071 00:43:13.071 ' 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:13.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:13.071 --rc genhtml_branch_coverage=1 00:43:13.071 --rc genhtml_function_coverage=1 00:43:13.071 --rc genhtml_legend=1 00:43:13.071 --rc geninfo_all_blocks=1 00:43:13.071 --rc geninfo_unexecuted_blocks=1 00:43:13.071 00:43:13.071 ' 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:13.071 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:43:13.072 
14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:43:13.072 14:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:43:21.208 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:21.208 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:43:21.208 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:21.208 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:21.208 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:21.208 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:21.208 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:21.208 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:43:21.208 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:21.208 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:43:21.208 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:43:21.208 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:43:21.208 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:43:21.208 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:43:21.208 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:43:21.208 14:53:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:21.208 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:21.208 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:21.208 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:21.208 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:21.208 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:21.208 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:21.208 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:21.209 14:53:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:43:21.209 Found 0000:31:00.0 (0x8086 - 0x159b) 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:43:21.209 Found 0000:31:00.1 (0x8086 - 0x159b) 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:21.209 14:53:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:43:21.209 Found net devices under 0000:31:00.0: cvl_0_0 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:21.209 14:53:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:43:21.209 Found net devices under 0000:31:00.1: cvl_0_1 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:21.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:21.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:43:21.209 00:43:21.209 --- 10.0.0.2 ping statistics --- 00:43:21.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:21.209 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:21.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:21.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:43:21.209 00:43:21.209 --- 10.0.0.1 ping statistics --- 00:43:21.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:21.209 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=3342763 
00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 3342763 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3342763 ']' 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:21.209 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:21.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:21.210 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:21.210 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:43:21.210 14:53:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:43:21.210 [2024-10-07 14:53:43.824873] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:21.210 [2024-10-07 14:53:43.827145] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:43:21.210 [2024-10-07 14:53:43.827229] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:21.210 [2024-10-07 14:53:43.950208] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:21.210 [2024-10-07 14:53:44.129698] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:21.210 [2024-10-07 14:53:44.129748] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:21.210 [2024-10-07 14:53:44.129761] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:21.210 [2024-10-07 14:53:44.129772] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:21.210 [2024-10-07 14:53:44.129782] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:21.210 [2024-10-07 14:53:44.131510] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:43:21.210 [2024-10-07 14:53:44.131591] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:43:21.210 [2024-10-07 14:53:44.131597] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:43:21.210 [2024-10-07 14:53:44.378515] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:21.210 [2024-10-07 14:53:44.378622] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:21.210 [2024-10-07 14:53:44.379086] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:43:21.210 [2024-10-07 14:53:44.379363] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:43:21.210 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:21.210 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:43:21.210 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:43:21.210 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:21.210 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:43:21.210 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:21.210 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:21.210 [2024-10-07 14:53:44.748336] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:21.210 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:21.471 14:53:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:43:21.471 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:21.732 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:43:21.732 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:43:21.732 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:43:21.993 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c4e6721f-7ce7-4043-a24a-3b4cdbb73d8b 00:43:21.993 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c4e6721f-7ce7-4043-a24a-3b4cdbb73d8b lvol 20 00:43:22.254 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=fabc37b4-3846-4d76-bed7-9ff275ba7747 00:43:22.254 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:43:22.254 14:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fabc37b4-3846-4d76-bed7-9ff275ba7747 00:43:22.515 14:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:22.775 [2024-10-07 14:53:46.252557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:22.775 14:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:43:22.775 
14:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3343223 00:43:22.775 14:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:43:22.775 14:53:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:43:24.161 14:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot fabc37b4-3846-4d76-bed7-9ff275ba7747 MY_SNAPSHOT 00:43:24.161 14:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ac125646-05da-48f8-a9a8-cf4d2a401b20 00:43:24.161 14:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize fabc37b4-3846-4d76-bed7-9ff275ba7747 30 00:43:24.423 14:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ac125646-05da-48f8-a9a8-cf4d2a401b20 MY_CLONE 00:43:24.423 14:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=22984ecd-bb40-4799-a928-fb73de97aed8 00:43:24.423 14:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 22984ecd-bb40-4799-a928-fb73de97aed8 00:43:24.994 14:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3343223 00:43:33.130 Initializing NVMe Controllers 00:43:33.130 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:43:33.130 
Controller IO queue size 128, less than required. 00:43:33.130 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:43:33.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:43:33.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:43:33.130 Initialization complete. Launching workers. 00:43:33.130 ======================================================== 00:43:33.130 Latency(us) 00:43:33.130 Device Information : IOPS MiB/s Average min max 00:43:33.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 14635.20 57.17 8747.99 607.63 128292.40 00:43:33.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11444.40 44.70 11184.14 2849.06 144137.11 00:43:33.130 ======================================================== 00:43:33.130 Total : 26079.60 101.87 9817.04 607.63 144137.11 00:43:33.130 00:43:33.130 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:33.390 14:53:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fabc37b4-3846-4d76-bed7-9ff275ba7747 00:43:33.650 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c4e6721f-7ce7-4043-a24a-3b4cdbb73d8b 00:43:33.650 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:43:33.650 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:43:33.650 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:43:33.650 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:43:33.650 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:43:33.650 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:33.650 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:43:33.650 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:33.650 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:33.650 rmmod nvme_tcp 00:43:33.650 rmmod nvme_fabrics 00:43:33.650 rmmod nvme_keyring 00:43:33.910 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:33.910 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:43:33.910 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:43:33.910 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 3342763 ']' 00:43:33.910 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 3342763 00:43:33.910 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3342763 ']' 00:43:33.910 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3342763 00:43:33.910 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:43:33.910 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:33.910 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 3342763 00:43:33.910 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:33.910 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:33.910 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3342763' 00:43:33.910 killing process with pid 3342763 00:43:33.910 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3342763 00:43:33.910 14:53:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3342763 00:43:35.298 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:43:35.298 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:43:35.298 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:43:35.298 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:43:35.298 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:43:35.298 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:43:35.298 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:43:35.298 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:35.298 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:35.298 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:35.298 14:53:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:35.298 14:53:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:37.207 00:43:37.207 real 0m24.194s 00:43:37.207 user 0m56.735s 00:43:37.207 sys 0m10.205s 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:43:37.207 ************************************ 00:43:37.207 END TEST nvmf_lvol 00:43:37.207 ************************************ 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:37.207 ************************************ 00:43:37.207 START TEST nvmf_lvs_grow 00:43:37.207 ************************************ 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:43:37.207 * Looking for test storage... 
00:43:37.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:37.207 14:54:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:37.207 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:37.468 14:54:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:43:37.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:37.468 --rc genhtml_branch_coverage=1 00:43:37.468 --rc genhtml_function_coverage=1 00:43:37.468 --rc genhtml_legend=1 00:43:37.468 --rc geninfo_all_blocks=1 00:43:37.468 --rc geninfo_unexecuted_blocks=1 00:43:37.468 00:43:37.468 ' 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:43:37.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:37.468 --rc genhtml_branch_coverage=1 00:43:37.468 --rc genhtml_function_coverage=1 00:43:37.468 --rc genhtml_legend=1 00:43:37.468 --rc geninfo_all_blocks=1 00:43:37.468 --rc geninfo_unexecuted_blocks=1 00:43:37.468 00:43:37.468 ' 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:43:37.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:37.468 --rc genhtml_branch_coverage=1 00:43:37.468 --rc genhtml_function_coverage=1 00:43:37.468 --rc genhtml_legend=1 00:43:37.468 --rc geninfo_all_blocks=1 00:43:37.468 --rc geninfo_unexecuted_blocks=1 00:43:37.468 00:43:37.468 ' 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:43:37.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:37.468 --rc genhtml_branch_coverage=1 00:43:37.468 --rc genhtml_function_coverage=1 00:43:37.468 --rc genhtml_legend=1 00:43:37.468 --rc geninfo_all_blocks=1 00:43:37.468 --rc 
geninfo_unexecuted_blocks=1 00:43:37.468 00:43:37.468 ' 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:43:37.468 14:54:00 
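The `cmp_versions` trace earlier in this chunk splits each version string on `.` and `-` (`IFS=.-`, `read -ra`) and compares components numerically. A minimal standalone sketch of the same approach (the function name `ver_lt` is illustrative, not SPDK's actual helper):

```shell
#!/usr/bin/env bash
# Component-wise "less than" for dotted version strings, as done by
# scripts/common.sh's cmp_versions: split on .- and compare numerically,
# padding the shorter version with zeros.
ver_lt() {
    local -a v1 v2
    local i n a b
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"     # matches the lcov check in the log
```

This is why the log's `lt 1.15 2` succeeds: the first components already decide the comparison (1 < 2), so lcov 1.x takes the legacy-option branch.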
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:37.468 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:37.469 14:54:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:37.469 14:54:00 
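The `paths/export.sh` trace above prepends the same `/opt/...` toolchain directories on every source, so PATH accumulates many duplicate entries. A hedged sketch of order-preserving deduplication (the name `dedupe_path` is illustrative; the test scripts do not actually do this):

```shell
#!/usr/bin/env bash
# Remove duplicate entries from a PATH-like string, keeping the first
# occurrence of each entry in order.
dedupe_path() {
    local entry result=
    local -A seen=()
    local IFS=:
    for entry in $1; do              # unquoted: split on IFS=:
        [[ -n ${seen[$entry]:-} ]] && continue
        seen[$entry]=1
        result+=${result:+:}$entry
    done
    printf '%s\n' "$result"
}

dedupe_path "/opt/go/bin:/usr/bin:/opt/go/bin:/bin:/usr/bin"
# → /opt/go/bin:/usr/bin:/bin
```

Duplicates are harmless for lookup (the first hit wins) but inflate logs exactly as seen here.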
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:43:37.469 14:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:43:44.047 
14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:44.047 14:54:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:44.047 14:54:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:43:44.047 Found 0000:31:00.0 (0x8086 - 0x159b) 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:43:44.047 Found 0000:31:00.1 (0x8086 - 0x159b) 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:44.047 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:43:44.048 Found net devices under 0000:31:00.0: cvl_0_0 00:43:44.048 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:44.048 14:54:07 
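The device-discovery trace above classifies each PCI device by comparing its device ID against known lists (the xtrace-escaped `[[ 0x159b == \0\x\1\0\1\7 ]]` checks). A condensed sketch of the same classification; the ID-to-family mapping is taken from the array setup in the log (0x159b is an Intel E810-family "ice" NIC), and `classify_nic` is an illustrative name:

```shell
#!/usr/bin/env bash
# Map a PCI device ID to the NIC family buckets used by nvmf/common.sh
# (e810/x722 are Intel, mlx5 entries are Mellanox).
classify_nic() {
    case $1 in
        0x1592|0x159b)               echo e810 ;;
        0x37d2)                      echo x722 ;;
        0x1017|0x1019|0x1015|0x1013) echo mlx5 ;;
        *)                           echo unknown ;;
    esac
}

classify_nic 0x159b   # → e810, as found for 0000:31:00.0 and .1 above
```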
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:43:44.048 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:44.048 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:43:44.048 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:44.048 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:43:44.048 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:43:44.308 Found net devices under 0000:31:00.1: cvl_0_1 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:44.308 
14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:44.308 14:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:44.308 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:44.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:44.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:43:44.308 00:43:44.308 --- 10.0.0.2 ping statistics --- 00:43:44.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:44.308 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:43:44.308 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:44.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:44.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:43:44.568 00:43:44.568 --- 10.0.0.1 ping statistics --- 00:43:44.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:44.568 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:43:44.568 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:44.568 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:43:44.568 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:43:44.568 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:44.568 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:43:44.568 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:43:44.568 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:44.568 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:43:44.568 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:43:44.568 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:43:44.568 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:43:44.568 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:43:44.568 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:43:44.568 14:54:08 
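The connectivity check above relies on `ping -c 1` succeeding across the namespace boundary. The iputils summary lines it prints can also be parsed programmatically; a hedged sketch (`parse_ping` is an illustrative name, and the here-doc reproduces the log's 10.0.0.2 output):

```shell
#!/usr/bin/env bash
# Extract packet loss and average RTT from standard iputils `ping`
# statistics output on stdin.
parse_ping() {
    awk -F', ' '/packet loss/ { sub(/%.*/, "", $3); print "loss=" $3 }
                /rtt min\/avg\/max/ { split($0, a, "/"); print "avg_ms=" a[5] }'
}

parse_ping <<'EOF'
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms
--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms
EOF
# → loss=0
# → avg_ms=0.363
```

With `-c 1`, anything other than `loss=0` would mean the netns/veth wiring or the iptables ACCEPT rule inserted just above is broken.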
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=3349734 00:43:44.568 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 3349734 00:43:44.568 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:43:44.568 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3349734 ']' 00:43:44.568 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:44.568 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:44.568 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:44.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:44.568 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:44.568 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:43:44.568 [2024-10-07 14:54:08.165821] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:44.568 [2024-10-07 14:54:08.168125] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:43:44.568 [2024-10-07 14:54:08.168210] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:44.837 [2024-10-07 14:54:08.290900] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:44.837 [2024-10-07 14:54:08.469122] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:44.837 [2024-10-07 14:54:08.469170] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:44.837 [2024-10-07 14:54:08.469184] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:44.837 [2024-10-07 14:54:08.469194] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:44.837 [2024-10-07 14:54:08.469204] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:44.837 [2024-10-07 14:54:08.470393] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:43:45.100 [2024-10-07 14:54:08.707465] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:45.100 [2024-10-07 14:54:08.707765] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:43:45.361 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:45.361 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:43:45.361 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:43:45.361 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:43:45.361 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:43:45.361 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:45.361 14:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:45.621 [2024-10-07 14:54:09.111615] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:45.621 14:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:43:45.621 14:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:43:45.621 14:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:45.621 14:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:43:45.621 ************************************ 00:43:45.621 START TEST lvs_grow_clean 00:43:45.621 ************************************ 00:43:45.621 14:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:43:45.621 14:54:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:43:45.621 14:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:43:45.621 14:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:43:45.621 14:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:43:45.621 14:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:43:45.621 14:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:43:45.621 14:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:43:45.621 14:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:43:45.621 14:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:43:45.881 14:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:43:45.881 14:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:43:45.881 14:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=88444982-99a3-40a1-83cb-a54ef51c4bce 00:43:45.881 14:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88444982-99a3-40a1-83cb-a54ef51c4bce 00:43:45.881 14:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:43:46.140 14:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:43:46.140 14:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:43:46.140 14:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 88444982-99a3-40a1-83cb-a54ef51c4bce lvol 150 00:43:46.406 14:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=09aada85-0c02-41cb-b2b1-8cb1d90bdd7f 00:43:46.406 14:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:43:46.406 14:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:43:46.406 [2024-10-07 14:54:10.055041] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:43:46.406 [2024-10-07 14:54:10.055137] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:43:46.406 true 00:43:46.406 14:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88444982-99a3-40a1-83cb-a54ef51c4bce 00:43:46.406 14:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:43:46.677 14:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:43:46.677 14:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:43:46.998 14:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 09aada85-0c02-41cb-b2b1-8cb1d90bdd7f 00:43:46.998 14:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:47.337 [2024-10-07 14:54:10.735376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:47.337 14:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:43:47.337 14:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3350814 00:43:47.337 14:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:43:47.337 14:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:43:47.337 14:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3350814 /var/tmp/bdevperf.sock 00:43:47.337 14:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3350814 ']' 00:43:47.337 14:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:43:47.338 14:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:47.338 14:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:43:47.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:43:47.338 14:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:47.338 14:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:43:47.338 [2024-10-07 14:54:11.001212] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:43:47.338 [2024-10-07 14:54:11.001324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3350814 ] 00:43:47.638 [2024-10-07 14:54:11.136861] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:47.910 [2024-10-07 14:54:11.360417] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:43:48.170 14:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:48.170 14:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:43:48.170 14:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:43:48.431 Nvme0n1 00:43:48.691 14:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:43:48.691 [ 00:43:48.691 { 00:43:48.691 "name": "Nvme0n1", 00:43:48.691 "aliases": [ 00:43:48.692 "09aada85-0c02-41cb-b2b1-8cb1d90bdd7f" 00:43:48.692 ], 00:43:48.692 "product_name": "NVMe disk", 00:43:48.692 
"block_size": 4096, 00:43:48.692 "num_blocks": 38912, 00:43:48.692 "uuid": "09aada85-0c02-41cb-b2b1-8cb1d90bdd7f", 00:43:48.692 "numa_id": 0, 00:43:48.692 "assigned_rate_limits": { 00:43:48.692 "rw_ios_per_sec": 0, 00:43:48.692 "rw_mbytes_per_sec": 0, 00:43:48.692 "r_mbytes_per_sec": 0, 00:43:48.692 "w_mbytes_per_sec": 0 00:43:48.692 }, 00:43:48.692 "claimed": false, 00:43:48.692 "zoned": false, 00:43:48.692 "supported_io_types": { 00:43:48.692 "read": true, 00:43:48.692 "write": true, 00:43:48.692 "unmap": true, 00:43:48.692 "flush": true, 00:43:48.692 "reset": true, 00:43:48.692 "nvme_admin": true, 00:43:48.692 "nvme_io": true, 00:43:48.692 "nvme_io_md": false, 00:43:48.692 "write_zeroes": true, 00:43:48.692 "zcopy": false, 00:43:48.692 "get_zone_info": false, 00:43:48.692 "zone_management": false, 00:43:48.692 "zone_append": false, 00:43:48.692 "compare": true, 00:43:48.692 "compare_and_write": true, 00:43:48.692 "abort": true, 00:43:48.692 "seek_hole": false, 00:43:48.692 "seek_data": false, 00:43:48.692 "copy": true, 00:43:48.692 "nvme_iov_md": false 00:43:48.692 }, 00:43:48.692 "memory_domains": [ 00:43:48.692 { 00:43:48.692 "dma_device_id": "system", 00:43:48.692 "dma_device_type": 1 00:43:48.692 } 00:43:48.692 ], 00:43:48.692 "driver_specific": { 00:43:48.692 "nvme": [ 00:43:48.692 { 00:43:48.692 "trid": { 00:43:48.692 "trtype": "TCP", 00:43:48.692 "adrfam": "IPv4", 00:43:48.692 "traddr": "10.0.0.2", 00:43:48.692 "trsvcid": "4420", 00:43:48.692 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:43:48.692 }, 00:43:48.692 "ctrlr_data": { 00:43:48.692 "cntlid": 1, 00:43:48.692 "vendor_id": "0x8086", 00:43:48.692 "model_number": "SPDK bdev Controller", 00:43:48.692 "serial_number": "SPDK0", 00:43:48.692 "firmware_revision": "25.01", 00:43:48.692 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:48.692 "oacs": { 00:43:48.692 "security": 0, 00:43:48.692 "format": 0, 00:43:48.692 "firmware": 0, 00:43:48.692 "ns_manage": 0 00:43:48.692 }, 00:43:48.692 "multi_ctrlr": true, 
00:43:48.692 "ana_reporting": false 00:43:48.692 }, 00:43:48.692 "vs": { 00:43:48.692 "nvme_version": "1.3" 00:43:48.692 }, 00:43:48.692 "ns_data": { 00:43:48.692 "id": 1, 00:43:48.692 "can_share": true 00:43:48.692 } 00:43:48.692 } 00:43:48.692 ], 00:43:48.692 "mp_policy": "active_passive" 00:43:48.692 } 00:43:48.692 } 00:43:48.692 ] 00:43:48.692 14:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3350994 00:43:48.692 14:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:43:48.692 14:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:43:48.953 Running I/O for 10 seconds... 00:43:49.898 Latency(us) 00:43:49.898 [2024-10-07T12:54:13.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:49.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:49.898 Nvme0n1 : 1.00 15973.00 62.39 0.00 0.00 0.00 0.00 0.00 00:43:49.898 [2024-10-07T12:54:13.607Z] =================================================================================================================== 00:43:49.898 [2024-10-07T12:54:13.607Z] Total : 15973.00 62.39 0.00 0.00 0.00 0.00 0.00 00:43:49.898 00:43:50.838 14:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 88444982-99a3-40a1-83cb-a54ef51c4bce 00:43:50.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:50.838 Nvme0n1 : 2.00 16050.50 62.70 0.00 0.00 0.00 0.00 0.00 00:43:50.838 [2024-10-07T12:54:14.547Z] 
=================================================================================================================== 00:43:50.838 [2024-10-07T12:54:14.547Z] Total : 16050.50 62.70 0.00 0.00 0.00 0.00 0.00 00:43:50.838 00:43:50.838 true 00:43:50.838 14:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88444982-99a3-40a1-83cb-a54ef51c4bce 00:43:50.838 14:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:43:51.098 14:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:43:51.098 14:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:43:51.098 14:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3350994 00:43:52.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:52.038 Nvme0n1 : 3.00 16076.00 62.80 0.00 0.00 0.00 0.00 0.00 00:43:52.038 [2024-10-07T12:54:15.747Z] =================================================================================================================== 00:43:52.038 [2024-10-07T12:54:15.747Z] Total : 16076.00 62.80 0.00 0.00 0.00 0.00 0.00 00:43:52.038 00:43:52.982 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:52.982 Nvme0n1 : 4.00 16120.75 62.97 0.00 0.00 0.00 0.00 0.00 00:43:52.982 [2024-10-07T12:54:16.691Z] =================================================================================================================== 00:43:52.982 [2024-10-07T12:54:16.691Z] Total : 16120.75 62.97 0.00 0.00 0.00 0.00 0.00 00:43:52.982 00:43:53.923 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:43:53.923 Nvme0n1 : 5.00 16135.40 63.03 0.00 0.00 0.00 0.00 0.00 00:43:53.923 [2024-10-07T12:54:17.632Z] =================================================================================================================== 00:43:53.923 [2024-10-07T12:54:17.632Z] Total : 16135.40 63.03 0.00 0.00 0.00 0.00 0.00 00:43:53.923 00:43:54.863 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:54.863 Nvme0n1 : 6.00 16155.50 63.11 0.00 0.00 0.00 0.00 0.00 00:43:54.863 [2024-10-07T12:54:18.572Z] =================================================================================================================== 00:43:54.863 [2024-10-07T12:54:18.572Z] Total : 16155.50 63.11 0.00 0.00 0.00 0.00 0.00 00:43:54.863 00:43:55.805 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:55.805 Nvme0n1 : 7.00 16169.29 63.16 0.00 0.00 0.00 0.00 0.00 00:43:55.805 [2024-10-07T12:54:19.514Z] =================================================================================================================== 00:43:55.805 [2024-10-07T12:54:19.514Z] Total : 16169.29 63.16 0.00 0.00 0.00 0.00 0.00 00:43:55.805 00:43:56.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:56.746 Nvme0n1 : 8.00 16185.62 63.23 0.00 0.00 0.00 0.00 0.00 00:43:56.746 [2024-10-07T12:54:20.455Z] =================================================================================================================== 00:43:56.746 [2024-10-07T12:54:20.455Z] Total : 16185.62 63.23 0.00 0.00 0.00 0.00 0.00 00:43:56.746 00:43:58.128 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:58.128 Nvme0n1 : 9.00 16188.33 63.24 0.00 0.00 0.00 0.00 0.00 00:43:58.128 [2024-10-07T12:54:21.837Z] =================================================================================================================== 00:43:58.128 [2024-10-07T12:54:21.837Z] Total : 16188.33 63.24 0.00 0.00 0.00 0.00 0.00 00:43:58.128 
00:43:59.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:59.069 Nvme0n1 : 10.00 16200.40 63.28 0.00 0.00 0.00 0.00 0.00 00:43:59.069 [2024-10-07T12:54:22.778Z] =================================================================================================================== 00:43:59.069 [2024-10-07T12:54:22.778Z] Total : 16200.40 63.28 0.00 0.00 0.00 0.00 0.00 00:43:59.069 00:43:59.069 00:43:59.069 Latency(us) 00:43:59.069 [2024-10-07T12:54:22.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:59.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:43:59.069 Nvme0n1 : 10.00 16205.17 63.30 0.00 0.00 7894.53 2389.33 14527.15 00:43:59.069 [2024-10-07T12:54:22.778Z] =================================================================================================================== 00:43:59.069 [2024-10-07T12:54:22.778Z] Total : 16205.17 63.30 0.00 0.00 7894.53 2389.33 14527.15 00:43:59.069 { 00:43:59.069 "results": [ 00:43:59.069 { 00:43:59.069 "job": "Nvme0n1", 00:43:59.069 "core_mask": "0x2", 00:43:59.069 "workload": "randwrite", 00:43:59.069 "status": "finished", 00:43:59.069 "queue_depth": 128, 00:43:59.069 "io_size": 4096, 00:43:59.069 "runtime": 10.004956, 00:43:59.069 "iops": 16205.16871838317, 00:43:59.069 "mibps": 63.301440306184254, 00:43:59.069 "io_failed": 0, 00:43:59.069 "io_timeout": 0, 00:43:59.069 "avg_latency_us": 7894.527364205298, 00:43:59.069 "min_latency_us": 2389.3333333333335, 00:43:59.069 "max_latency_us": 14527.146666666667 00:43:59.069 } 00:43:59.069 ], 00:43:59.069 "core_count": 1 00:43:59.069 } 00:43:59.069 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3350814 00:43:59.069 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3350814 ']' 00:43:59.069 14:54:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3350814 00:43:59.069 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:43:59.069 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:59.069 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3350814 00:43:59.069 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:43:59.069 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:43:59.069 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3350814' 00:43:59.069 killing process with pid 3350814 00:43:59.069 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3350814 00:43:59.069 Received shutdown signal, test time was about 10.000000 seconds 00:43:59.069 00:43:59.069 Latency(us) 00:43:59.069 [2024-10-07T12:54:22.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:59.069 [2024-10-07T12:54:22.778Z] =================================================================================================================== 00:43:59.069 [2024-10-07T12:54:22.778Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:59.069 14:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3350814 00:43:59.640 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:43:59.640 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:59.900 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88444982-99a3-40a1-83cb-a54ef51c4bce 00:43:59.900 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:43:59.900 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:43:59.900 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:43:59.900 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:44:00.161 [2024-10-07 14:54:23.711058] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:44:00.161 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88444982-99a3-40a1-83cb-a54ef51c4bce 00:44:00.161 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:44:00.161 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88444982-99a3-40a1-83cb-a54ef51c4bce 00:44:00.161 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:44:00.161 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:00.161 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:44:00.161 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:00.161 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:44:00.161 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:00.161 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:44:00.161 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:44:00.161 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88444982-99a3-40a1-83cb-a54ef51c4bce 00:44:00.423 request: 00:44:00.423 { 00:44:00.423 "uuid": "88444982-99a3-40a1-83cb-a54ef51c4bce", 00:44:00.423 "method": 
"bdev_lvol_get_lvstores", 00:44:00.423 "req_id": 1 00:44:00.423 } 00:44:00.423 Got JSON-RPC error response 00:44:00.423 response: 00:44:00.423 { 00:44:00.423 "code": -19, 00:44:00.423 "message": "No such device" 00:44:00.423 } 00:44:00.423 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:44:00.423 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:44:00.423 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:44:00.423 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:44:00.423 14:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:44:00.423 aio_bdev 00:44:00.423 14:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 09aada85-0c02-41cb-b2b1-8cb1d90bdd7f 00:44:00.423 14:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=09aada85-0c02-41cb-b2b1-8cb1d90bdd7f 00:44:00.423 14:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:44:00.423 14:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:44:00.423 14:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:44:00.423 14:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:44:00.423 14:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:44:00.683 14:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 09aada85-0c02-41cb-b2b1-8cb1d90bdd7f -t 2000 00:44:00.683 [ 00:44:00.683 { 00:44:00.683 "name": "09aada85-0c02-41cb-b2b1-8cb1d90bdd7f", 00:44:00.683 "aliases": [ 00:44:00.683 "lvs/lvol" 00:44:00.683 ], 00:44:00.683 "product_name": "Logical Volume", 00:44:00.683 "block_size": 4096, 00:44:00.683 "num_blocks": 38912, 00:44:00.683 "uuid": "09aada85-0c02-41cb-b2b1-8cb1d90bdd7f", 00:44:00.683 "assigned_rate_limits": { 00:44:00.683 "rw_ios_per_sec": 0, 00:44:00.683 "rw_mbytes_per_sec": 0, 00:44:00.683 "r_mbytes_per_sec": 0, 00:44:00.683 "w_mbytes_per_sec": 0 00:44:00.683 }, 00:44:00.683 "claimed": false, 00:44:00.683 "zoned": false, 00:44:00.683 "supported_io_types": { 00:44:00.683 "read": true, 00:44:00.683 "write": true, 00:44:00.683 "unmap": true, 00:44:00.683 "flush": false, 00:44:00.683 "reset": true, 00:44:00.683 "nvme_admin": false, 00:44:00.683 "nvme_io": false, 00:44:00.683 "nvme_io_md": false, 00:44:00.683 "write_zeroes": true, 00:44:00.683 "zcopy": false, 00:44:00.683 "get_zone_info": false, 00:44:00.683 "zone_management": false, 00:44:00.683 "zone_append": false, 00:44:00.683 "compare": false, 00:44:00.683 "compare_and_write": false, 00:44:00.683 "abort": false, 00:44:00.683 "seek_hole": true, 00:44:00.683 "seek_data": true, 00:44:00.683 "copy": false, 00:44:00.683 "nvme_iov_md": false 00:44:00.683 }, 00:44:00.683 "driver_specific": { 00:44:00.683 "lvol": { 00:44:00.683 "lvol_store_uuid": "88444982-99a3-40a1-83cb-a54ef51c4bce", 00:44:00.684 "base_bdev": "aio_bdev", 00:44:00.684 
"thin_provision": false, 00:44:00.684 "num_allocated_clusters": 38, 00:44:00.684 "snapshot": false, 00:44:00.684 "clone": false, 00:44:00.684 "esnap_clone": false 00:44:00.684 } 00:44:00.684 } 00:44:00.684 } 00:44:00.684 ] 00:44:00.684 14:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:44:00.684 14:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88444982-99a3-40a1-83cb-a54ef51c4bce 00:44:00.684 14:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:44:00.944 14:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:44:00.944 14:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 88444982-99a3-40a1-83cb-a54ef51c4bce 00:44:00.944 14:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:44:01.204 14:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:44:01.204 14:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 09aada85-0c02-41cb-b2b1-8cb1d90bdd7f 00:44:01.204 14:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 88444982-99a3-40a1-83cb-a54ef51c4bce 
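As a sanity check on the bdevperf summary earlier in this log, the MiB/s column is simply IOPS multiplied by the 4 KiB I/O size. The helper below is hypothetical (not part of the SPDK test suite), shown only to verify the reported numbers:

```python
MIB = 1024 * 1024


def iops_to_mibps(iops: float, io_size: int = 4096) -> float:
    """Convert an IOPS figure to MiB/s for a fixed I/O size (bdevperf ran -o 4096)."""
    return iops * io_size / MIB


# Final 10-second summary from the log: 16205.17 IOPS at 4 KiB.
print(f"{iops_to_mibps(16205.17):.2f}")  # prints 63.30, matching the reported MiB/s
```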
00:44:01.464 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:44:01.725 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:44:01.725 00:44:01.725 real 0m16.147s 00:44:01.725 user 0m15.727s 00:44:01.725 sys 0m1.448s 00:44:01.725 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:01.725 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:44:01.725 ************************************ 00:44:01.725 END TEST lvs_grow_clean 00:44:01.725 ************************************ 00:44:01.725 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:44:01.725 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:44:01.725 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:01.725 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:44:01.725 ************************************ 00:44:01.725 START TEST lvs_grow_dirty 00:44:01.725 ************************************ 00:44:01.725 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:44:01.725 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:44:01.725 14:54:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:44:01.725 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:44:01.725 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:44:01.725 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:44:01.725 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:44:01.725 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:44:01.725 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:44:01.725 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:44:01.985 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:44:01.985 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:44:02.245 14:54:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=84e96e75-0d1c-4912-9107-8a96e334a6e0 00:44:02.245 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84e96e75-0d1c-4912-9107-8a96e334a6e0 00:44:02.246 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:44:02.246 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:44:02.246 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:44:02.246 14:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 84e96e75-0d1c-4912-9107-8a96e334a6e0 lvol 150 00:44:02.505 14:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=9d6f6ccb-07b7-477d-b60c-c71ae68d1ba3 00:44:02.505 14:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:44:02.505 14:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:44:02.765 [2024-10-07 14:54:26.243206] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:44:02.765 [2024-10-07 
14:54:26.243389] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:44:02.765 true 00:44:02.765 14:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:44:02.765 14:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84e96e75-0d1c-4912-9107-8a96e334a6e0 00:44:02.765 14:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:44:02.765 14:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:44:03.026 14:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9d6f6ccb-07b7-477d-b60c-c71ae68d1ba3 00:44:03.287 14:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:03.287 [2024-10-07 14:54:26.915390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:03.287 14:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:44:03.548 14:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3353838 00:44:03.548 14:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:03.548 14:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:44:03.548 14:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3353838 /var/tmp/bdevperf.sock 00:44:03.548 14:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3353838 ']' 00:44:03.548 14:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:44:03.548 14:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:03.548 14:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:44:03.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:44:03.548 14:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:03.548 14:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:44:03.548 [2024-10-07 14:54:27.179925] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:44:03.548 [2024-10-07 14:54:27.180067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3353838 ] 00:44:03.809 [2024-10-07 14:54:27.314220] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:03.809 [2024-10-07 14:54:27.456478] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:44:04.380 14:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:04.380 14:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:44:04.380 14:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:44:04.641 Nvme0n1 00:44:04.641 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:44:04.902 [ 00:44:04.902 { 00:44:04.902 "name": "Nvme0n1", 00:44:04.902 "aliases": [ 00:44:04.902 "9d6f6ccb-07b7-477d-b60c-c71ae68d1ba3" 00:44:04.902 ], 00:44:04.902 "product_name": "NVMe disk", 00:44:04.902 "block_size": 4096, 00:44:04.902 "num_blocks": 38912, 00:44:04.902 "uuid": "9d6f6ccb-07b7-477d-b60c-c71ae68d1ba3", 00:44:04.902 "numa_id": 0, 00:44:04.902 "assigned_rate_limits": { 00:44:04.902 "rw_ios_per_sec": 0, 00:44:04.902 "rw_mbytes_per_sec": 0, 00:44:04.902 "r_mbytes_per_sec": 0, 00:44:04.902 "w_mbytes_per_sec": 0 00:44:04.902 }, 00:44:04.902 "claimed": false, 00:44:04.902 "zoned": false, 
00:44:04.902 "supported_io_types": { 00:44:04.902 "read": true, 00:44:04.902 "write": true, 00:44:04.902 "unmap": true, 00:44:04.902 "flush": true, 00:44:04.902 "reset": true, 00:44:04.902 "nvme_admin": true, 00:44:04.902 "nvme_io": true, 00:44:04.902 "nvme_io_md": false, 00:44:04.902 "write_zeroes": true, 00:44:04.902 "zcopy": false, 00:44:04.902 "get_zone_info": false, 00:44:04.902 "zone_management": false, 00:44:04.902 "zone_append": false, 00:44:04.902 "compare": true, 00:44:04.902 "compare_and_write": true, 00:44:04.902 "abort": true, 00:44:04.902 "seek_hole": false, 00:44:04.902 "seek_data": false, 00:44:04.902 "copy": true, 00:44:04.902 "nvme_iov_md": false 00:44:04.902 }, 00:44:04.902 "memory_domains": [ 00:44:04.902 { 00:44:04.902 "dma_device_id": "system", 00:44:04.902 "dma_device_type": 1 00:44:04.902 } 00:44:04.902 ], 00:44:04.902 "driver_specific": { 00:44:04.902 "nvme": [ 00:44:04.902 { 00:44:04.902 "trid": { 00:44:04.902 "trtype": "TCP", 00:44:04.902 "adrfam": "IPv4", 00:44:04.902 "traddr": "10.0.0.2", 00:44:04.902 "trsvcid": "4420", 00:44:04.902 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:44:04.902 }, 00:44:04.902 "ctrlr_data": { 00:44:04.902 "cntlid": 1, 00:44:04.902 "vendor_id": "0x8086", 00:44:04.902 "model_number": "SPDK bdev Controller", 00:44:04.902 "serial_number": "SPDK0", 00:44:04.902 "firmware_revision": "25.01", 00:44:04.902 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:04.902 "oacs": { 00:44:04.902 "security": 0, 00:44:04.902 "format": 0, 00:44:04.902 "firmware": 0, 00:44:04.902 "ns_manage": 0 00:44:04.902 }, 00:44:04.902 "multi_ctrlr": true, 00:44:04.902 "ana_reporting": false 00:44:04.902 }, 00:44:04.902 "vs": { 00:44:04.902 "nvme_version": "1.3" 00:44:04.902 }, 00:44:04.902 "ns_data": { 00:44:04.902 "id": 1, 00:44:04.902 "can_share": true 00:44:04.902 } 00:44:04.902 } 00:44:04.902 ], 00:44:04.902 "mp_policy": "active_passive" 00:44:04.902 } 00:44:04.902 } 00:44:04.902 ] 00:44:04.902 14:54:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3354016 00:44:04.902 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:44:04.902 14:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:44:04.902 Running I/O for 10 seconds... 00:44:06.285 Latency(us) 00:44:06.285 [2024-10-07T12:54:29.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:06.285 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:06.285 Nvme0n1 : 1.00 15938.00 62.26 0.00 0.00 0.00 0.00 0.00 00:44:06.285 [2024-10-07T12:54:29.994Z] =================================================================================================================== 00:44:06.285 [2024-10-07T12:54:29.994Z] Total : 15938.00 62.26 0.00 0.00 0.00 0.00 0.00 00:44:06.285 00:44:06.856 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 84e96e75-0d1c-4912-9107-8a96e334a6e0 00:44:07.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:07.116 Nvme0n1 : 2.00 16054.50 62.71 0.00 0.00 0.00 0.00 0.00 00:44:07.116 [2024-10-07T12:54:30.825Z] =================================================================================================================== 00:44:07.116 [2024-10-07T12:54:30.825Z] Total : 16054.50 62.71 0.00 0.00 0.00 0.00 0.00 00:44:07.116 00:44:07.116 true 00:44:07.116 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 84e96e75-0d1c-4912-9107-8a96e334a6e0 00:44:07.116 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:44:07.376 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:44:07.376 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:44:07.376 14:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3354016 00:44:07.947 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:07.947 Nvme0n1 : 3.00 16078.00 62.80 0.00 0.00 0.00 0.00 0.00 00:44:07.947 [2024-10-07T12:54:31.656Z] =================================================================================================================== 00:44:07.947 [2024-10-07T12:54:31.656Z] Total : 16078.00 62.80 0.00 0.00 0.00 0.00 0.00 00:44:07.947 00:44:09.331 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:09.331 Nvme0n1 : 4.00 16137.00 63.04 0.00 0.00 0.00 0.00 0.00 00:44:09.331 [2024-10-07T12:54:33.040Z] =================================================================================================================== 00:44:09.331 [2024-10-07T12:54:33.040Z] Total : 16137.00 63.04 0.00 0.00 0.00 0.00 0.00 00:44:09.331 00:44:09.902 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:09.902 Nvme0n1 : 5.00 16147.20 63.08 0.00 0.00 0.00 0.00 0.00 00:44:09.902 [2024-10-07T12:54:33.611Z] =================================================================================================================== 00:44:09.902 [2024-10-07T12:54:33.611Z] Total : 16147.20 63.08 0.00 0.00 0.00 0.00 0.00 00:44:09.902 00:44:11.285 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:44:11.285 Nvme0n1 : 6.00 16171.33 63.17 0.00 0.00 0.00 0.00 0.00 00:44:11.285 [2024-10-07T12:54:34.994Z] =================================================================================================================== 00:44:11.285 [2024-10-07T12:54:34.994Z] Total : 16171.33 63.17 0.00 0.00 0.00 0.00 0.00 00:44:11.285 00:44:12.226 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:12.226 Nvme0n1 : 7.00 16191.57 63.25 0.00 0.00 0.00 0.00 0.00 00:44:12.226 [2024-10-07T12:54:35.935Z] =================================================================================================================== 00:44:12.226 [2024-10-07T12:54:35.935Z] Total : 16191.57 63.25 0.00 0.00 0.00 0.00 0.00 00:44:12.226 00:44:13.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:13.167 Nvme0n1 : 8.00 16207.62 63.31 0.00 0.00 0.00 0.00 0.00 00:44:13.167 [2024-10-07T12:54:36.876Z] =================================================================================================================== 00:44:13.167 [2024-10-07T12:54:36.876Z] Total : 16207.62 63.31 0.00 0.00 0.00 0.00 0.00 00:44:13.167 00:44:14.108 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:14.108 Nvme0n1 : 9.00 16220.00 63.36 0.00 0.00 0.00 0.00 0.00 00:44:14.108 [2024-10-07T12:54:37.817Z] =================================================================================================================== 00:44:14.108 [2024-10-07T12:54:37.817Z] Total : 16220.00 63.36 0.00 0.00 0.00 0.00 0.00 00:44:14.108 00:44:15.048 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:15.048 Nvme0n1 : 10.00 16223.40 63.37 0.00 0.00 0.00 0.00 0.00 00:44:15.048 [2024-10-07T12:54:38.757Z] =================================================================================================================== 00:44:15.048 [2024-10-07T12:54:38.757Z] Total : 16223.40 63.37 0.00 0.00 0.00 0.00 0.00 00:44:15.048 00:44:15.048 
00:44:15.048 Latency(us) 00:44:15.048 [2024-10-07T12:54:38.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:15.048 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:44:15.048 Nvme0n1 : 10.01 16225.73 63.38 0.00 0.00 7884.08 1802.24 14636.37 00:44:15.048 [2024-10-07T12:54:38.757Z] =================================================================================================================== 00:44:15.048 [2024-10-07T12:54:38.757Z] Total : 16225.73 63.38 0.00 0.00 7884.08 1802.24 14636.37 00:44:15.048 { 00:44:15.048 "results": [ 00:44:15.048 { 00:44:15.048 "job": "Nvme0n1", 00:44:15.048 "core_mask": "0x2", 00:44:15.048 "workload": "randwrite", 00:44:15.048 "status": "finished", 00:44:15.048 "queue_depth": 128, 00:44:15.048 "io_size": 4096, 00:44:15.048 "runtime": 10.006451, 00:44:15.048 "iops": 16225.73277978376, 00:44:15.048 "mibps": 63.381768671030315, 00:44:15.048 "io_failed": 0, 00:44:15.048 "io_timeout": 0, 00:44:15.048 "avg_latency_us": 7884.076431020394, 00:44:15.048 "min_latency_us": 1802.24, 00:44:15.048 "max_latency_us": 14636.373333333333 00:44:15.048 } 00:44:15.048 ], 00:44:15.048 "core_count": 1 00:44:15.048 } 00:44:15.048 14:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3353838 00:44:15.049 14:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3353838 ']' 00:44:15.049 14:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3353838 00:44:15.049 14:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:44:15.049 14:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:15.049 14:54:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3353838 00:44:15.049 14:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:44:15.049 14:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:44:15.049 14:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3353838' 00:44:15.049 killing process with pid 3353838 00:44:15.049 14:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3353838 00:44:15.049 Received shutdown signal, test time was about 10.000000 seconds 00:44:15.049 00:44:15.049 Latency(us) 00:44:15.049 [2024-10-07T12:54:38.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:15.049 [2024-10-07T12:54:38.758Z] =================================================================================================================== 00:44:15.049 [2024-10-07T12:54:38.758Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:15.049 14:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3353838 00:44:15.620 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:44:15.880 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:16.139 14:54:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84e96e75-0d1c-4912-9107-8a96e334a6e0 00:44:16.140 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:44:16.140 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:44:16.140 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:44:16.140 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3349734 00:44:16.140 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3349734 00:44:16.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3349734 Killed "${NVMF_APP[@]}" "$@" 00:44:16.140 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:44:16.140 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:44:16.140 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:44:16.140 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:16.140 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:44:16.140 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=3356208 00:44:16.140 14:54:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 3356208 00:44:16.140 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3356208 ']' 00:44:16.140 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:44:16.140 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:16.140 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:16.140 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:16.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:16.140 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:16.140 14:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:44:16.400 [2024-10-07 14:54:39.924161] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:16.400 [2024-10-07 14:54:39.926514] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:44:16.400 [2024-10-07 14:54:39.926617] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:16.400 [2024-10-07 14:54:40.069852] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:16.661 [2024-10-07 14:54:40.254081] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:16.661 [2024-10-07 14:54:40.254130] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:16.661 [2024-10-07 14:54:40.254143] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:16.661 [2024-10-07 14:54:40.254153] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:16.661 [2024-10-07 14:54:40.254164] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:16.661 [2024-10-07 14:54:40.255364] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:44:16.920 [2024-10-07 14:54:40.490976] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:16.920 [2024-10-07 14:54:40.491289] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:44:17.181 14:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:17.181 14:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:44:17.181 14:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:44:17.181 14:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:17.181 14:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:44:17.181 14:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:17.181 14:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:44:17.181 [2024-10-07 14:54:40.884313] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:44:17.181 [2024-10-07 14:54:40.884729] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:44:17.181 [2024-10-07 14:54:40.884857] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:44:17.441 14:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:44:17.441 14:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 9d6f6ccb-07b7-477d-b60c-c71ae68d1ba3 00:44:17.441 14:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local 
bdev_name=9d6f6ccb-07b7-477d-b60c-c71ae68d1ba3 00:44:17.441 14:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:44:17.441 14:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:44:17.441 14:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:44:17.441 14:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:44:17.441 14:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:44:17.441 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9d6f6ccb-07b7-477d-b60c-c71ae68d1ba3 -t 2000 00:44:17.702 [ 00:44:17.702 { 00:44:17.702 "name": "9d6f6ccb-07b7-477d-b60c-c71ae68d1ba3", 00:44:17.702 "aliases": [ 00:44:17.702 "lvs/lvol" 00:44:17.702 ], 00:44:17.702 "product_name": "Logical Volume", 00:44:17.702 "block_size": 4096, 00:44:17.702 "num_blocks": 38912, 00:44:17.702 "uuid": "9d6f6ccb-07b7-477d-b60c-c71ae68d1ba3", 00:44:17.702 "assigned_rate_limits": { 00:44:17.702 "rw_ios_per_sec": 0, 00:44:17.702 "rw_mbytes_per_sec": 0, 00:44:17.702 "r_mbytes_per_sec": 0, 00:44:17.703 "w_mbytes_per_sec": 0 00:44:17.703 }, 00:44:17.703 "claimed": false, 00:44:17.703 "zoned": false, 00:44:17.703 "supported_io_types": { 00:44:17.703 "read": true, 00:44:17.703 "write": true, 00:44:17.703 "unmap": true, 00:44:17.703 "flush": false, 00:44:17.703 "reset": true, 00:44:17.703 "nvme_admin": false, 00:44:17.703 "nvme_io": false, 00:44:17.703 "nvme_io_md": false, 00:44:17.703 "write_zeroes": true, 
00:44:17.703 "zcopy": false, 00:44:17.703 "get_zone_info": false, 00:44:17.703 "zone_management": false, 00:44:17.703 "zone_append": false, 00:44:17.703 "compare": false, 00:44:17.703 "compare_and_write": false, 00:44:17.703 "abort": false, 00:44:17.703 "seek_hole": true, 00:44:17.703 "seek_data": true, 00:44:17.703 "copy": false, 00:44:17.703 "nvme_iov_md": false 00:44:17.703 }, 00:44:17.703 "driver_specific": { 00:44:17.703 "lvol": { 00:44:17.703 "lvol_store_uuid": "84e96e75-0d1c-4912-9107-8a96e334a6e0", 00:44:17.703 "base_bdev": "aio_bdev", 00:44:17.703 "thin_provision": false, 00:44:17.703 "num_allocated_clusters": 38, 00:44:17.703 "snapshot": false, 00:44:17.703 "clone": false, 00:44:17.703 "esnap_clone": false 00:44:17.703 } 00:44:17.703 } 00:44:17.703 } 00:44:17.703 ] 00:44:17.703 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:44:17.703 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84e96e75-0d1c-4912-9107-8a96e334a6e0 00:44:17.703 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:44:17.703 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:44:17.703 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84e96e75-0d1c-4912-9107-8a96e334a6e0 00:44:17.703 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:44:17.963 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:44:17.963 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:44:18.224 [2024-10-07 14:54:41.700310] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:44:18.224 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84e96e75-0d1c-4912-9107-8a96e334a6e0 00:44:18.224 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:44:18.224 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84e96e75-0d1c-4912-9107-8a96e334a6e0 00:44:18.224 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:44:18.224 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:18.224 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:44:18.224 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:18.224 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:44:18.224 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:44:18.224 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:44:18.224 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:44:18.224 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84e96e75-0d1c-4912-9107-8a96e334a6e0 00:44:18.224 request: 00:44:18.224 { 00:44:18.224 "uuid": "84e96e75-0d1c-4912-9107-8a96e334a6e0", 00:44:18.224 "method": "bdev_lvol_get_lvstores", 00:44:18.224 "req_id": 1 00:44:18.224 } 00:44:18.224 Got JSON-RPC error response 00:44:18.224 response: 00:44:18.224 { 00:44:18.224 "code": -19, 00:44:18.224 "message": "No such device" 00:44:18.224 } 00:44:18.485 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:44:18.485 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:44:18.485 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:44:18.485 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:44:18.485 14:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:44:18.485 aio_bdev 00:44:18.485 14:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9d6f6ccb-07b7-477d-b60c-c71ae68d1ba3 00:44:18.485 14:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=9d6f6ccb-07b7-477d-b60c-c71ae68d1ba3 00:44:18.485 14:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:44:18.485 14:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:44:18.485 14:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:44:18.485 14:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:44:18.485 14:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:44:18.746 14:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9d6f6ccb-07b7-477d-b60c-c71ae68d1ba3 -t 2000 00:44:18.746 [ 00:44:18.746 { 00:44:18.746 "name": "9d6f6ccb-07b7-477d-b60c-c71ae68d1ba3", 00:44:18.746 "aliases": [ 00:44:18.746 "lvs/lvol" 00:44:18.746 ], 00:44:18.746 "product_name": "Logical Volume", 00:44:18.746 "block_size": 4096, 00:44:18.746 "num_blocks": 38912, 00:44:18.746 "uuid": "9d6f6ccb-07b7-477d-b60c-c71ae68d1ba3", 00:44:18.746 "assigned_rate_limits": { 00:44:18.746 "rw_ios_per_sec": 0, 00:44:18.746 "rw_mbytes_per_sec": 0, 00:44:18.746 
"r_mbytes_per_sec": 0, 00:44:18.746 "w_mbytes_per_sec": 0 00:44:18.746 }, 00:44:18.746 "claimed": false, 00:44:18.746 "zoned": false, 00:44:18.746 "supported_io_types": { 00:44:18.746 "read": true, 00:44:18.746 "write": true, 00:44:18.746 "unmap": true, 00:44:18.746 "flush": false, 00:44:18.746 "reset": true, 00:44:18.746 "nvme_admin": false, 00:44:18.746 "nvme_io": false, 00:44:18.746 "nvme_io_md": false, 00:44:18.746 "write_zeroes": true, 00:44:18.746 "zcopy": false, 00:44:18.746 "get_zone_info": false, 00:44:18.746 "zone_management": false, 00:44:18.746 "zone_append": false, 00:44:18.746 "compare": false, 00:44:18.746 "compare_and_write": false, 00:44:18.746 "abort": false, 00:44:18.746 "seek_hole": true, 00:44:18.746 "seek_data": true, 00:44:18.746 "copy": false, 00:44:18.746 "nvme_iov_md": false 00:44:18.746 }, 00:44:18.746 "driver_specific": { 00:44:18.746 "lvol": { 00:44:18.746 "lvol_store_uuid": "84e96e75-0d1c-4912-9107-8a96e334a6e0", 00:44:18.746 "base_bdev": "aio_bdev", 00:44:18.746 "thin_provision": false, 00:44:18.746 "num_allocated_clusters": 38, 00:44:18.746 "snapshot": false, 00:44:18.746 "clone": false, 00:44:18.746 "esnap_clone": false 00:44:18.746 } 00:44:18.746 } 00:44:18.746 } 00:44:18.746 ] 00:44:18.746 14:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:44:18.746 14:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84e96e75-0d1c-4912-9107-8a96e334a6e0 00:44:18.746 14:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:44:19.007 14:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:44:19.007 14:54:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 84e96e75-0d1c-4912-9107-8a96e334a6e0 00:44:19.007 14:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:44:19.267 14:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:44:19.267 14:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9d6f6ccb-07b7-477d-b60c-c71ae68d1ba3 00:44:19.267 14:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 84e96e75-0d1c-4912-9107-8a96e334a6e0 00:44:19.529 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:44:19.791 00:44:19.791 real 0m17.911s 00:44:19.791 user 0m35.965s 00:44:19.791 sys 0m3.089s 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:44:19.791 ************************************ 00:44:19.791 END TEST lvs_grow_dirty 00:44:19.791 ************************************ 
00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:44:19.791 nvmf_trace.0 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:19.791 14:54:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:19.791 rmmod nvme_tcp 00:44:19.791 rmmod nvme_fabrics 00:44:19.791 rmmod nvme_keyring 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 3356208 ']' 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 3356208 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3356208 ']' 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3356208 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:19.791 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3356208 00:44:20.052 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:20.052 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:20.052 
14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3356208' 00:44:20.052 killing process with pid 3356208 00:44:20.052 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3356208 00:44:20.052 14:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3356208 00:44:20.993 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:44:20.993 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:44:20.993 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:44:20.993 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:44:20.993 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:44:20.993 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:44:20.993 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:44:20.993 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:20.993 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:20.993 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:20.993 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:20.993 14:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:22.904 
14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:22.904 00:44:22.904 real 0m45.780s 00:44:22.904 user 0m55.661s 00:44:22.904 sys 0m10.369s 00:44:22.904 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:22.904 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:44:22.904 ************************************ 00:44:22.904 END TEST nvmf_lvs_grow 00:44:22.904 ************************************ 00:44:22.904 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:44:22.904 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:44:22.904 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:22.904 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:44:22.904 ************************************ 00:44:22.904 START TEST nvmf_bdev_io_wait 00:44:22.904 ************************************ 00:44:22.904 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:44:23.166 * Looking for test storage... 
00:44:23.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:44:23.166 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:44:23.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:23.167 --rc genhtml_branch_coverage=1 00:44:23.167 --rc genhtml_function_coverage=1 00:44:23.167 --rc genhtml_legend=1 00:44:23.167 --rc geninfo_all_blocks=1 00:44:23.167 --rc geninfo_unexecuted_blocks=1 00:44:23.167 00:44:23.167 ' 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:44:23.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:23.167 --rc genhtml_branch_coverage=1 00:44:23.167 --rc genhtml_function_coverage=1 00:44:23.167 --rc genhtml_legend=1 00:44:23.167 --rc geninfo_all_blocks=1 00:44:23.167 --rc geninfo_unexecuted_blocks=1 00:44:23.167 00:44:23.167 ' 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:44:23.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:23.167 --rc genhtml_branch_coverage=1 00:44:23.167 --rc genhtml_function_coverage=1 00:44:23.167 --rc genhtml_legend=1 00:44:23.167 --rc geninfo_all_blocks=1 00:44:23.167 --rc geninfo_unexecuted_blocks=1 00:44:23.167 00:44:23.167 ' 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:44:23.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:23.167 --rc genhtml_branch_coverage=1 00:44:23.167 --rc genhtml_function_coverage=1 
00:44:23.167 --rc genhtml_legend=1 00:44:23.167 --rc geninfo_all_blocks=1 00:44:23.167 --rc geninfo_unexecuted_blocks=1 00:44:23.167 00:44:23.167 ' 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:23.167 14:54:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:23.167 14:54:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:23.167 14:54:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:44:23.167 14:54:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:44:23.167 14:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:31.307 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:31.307 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:44:31.307 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:31.307 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:31.307 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:44:31.308 14:54:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:44:31.308 Found 0000:31:00.0 (0x8086 - 0x159b) 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:44:31.308 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:44:31.308 Found net devices under 0000:31:00.0: cvl_0_0 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:44:31.308 Found net devices under 0000:31:00.1: cvl_0_1 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:44:31.308 14:54:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:31.308 14:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:31.308 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:31.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:44:31.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:44:31.309 00:44:31.309 --- 10.0.0.2 ping statistics --- 00:44:31.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:31.309 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:31.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:31.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:44:31.309 00:44:31.309 --- 10.0.0.1 ping statistics --- 00:44:31.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:31.309 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:44:31.309 14:54:54 
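The `nvmf_tcp_init` steps traced above build a two-port loopback topology: one physical port (`cvl_0_0`) is moved into a fresh network namespace and becomes the target side at 10.0.0.2, while its peer (`cvl_0_1`) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening the NVMe/TCP port. The sketch below replays that sequence as a dry run; every privileged command is echoed rather than executed, since the real script needs root and the two `cvl_*` interfaces. Interface and address names mirror the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology nvmf_tcp_init builds above.
set -euo pipefail

TARGET_IF=cvl_0_0  INITIATOR_IF=cvl_0_1
TARGET_NS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2 INITIATOR_IP=10.0.0.1

run() { echo "+ $*"; }   # replace the echo with plain "$@" on a real host

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$TARGET_NS"
run ip link set "$TARGET_IF" netns "$TARGET_NS"      # target port leaves the root ns
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$TARGET_NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
run ip netns exec "$TARGET_NS" ip link set lo up
# open the NVMe/TCP listener port toward the initiator-side interface
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TARGET_IP"                           # reachability check, as in the log
```

Because the two ports are cabled back-to-back, traffic between 10.0.0.1 and 10.0.0.2 actually crosses the wire, which is why the log shows sub-millisecond but nonzero ping times in both directions.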
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=3361198 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 3361198 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3361198 ']' 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:31.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:31.309 [2024-10-07 14:54:54.175865] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:31.309 [2024-10-07 14:54:54.178522] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:44:31.309 [2024-10-07 14:54:54.178619] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:31.309 [2024-10-07 14:54:54.305035] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:31.309 [2024-10-07 14:54:54.487792] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:31.309 [2024-10-07 14:54:54.487839] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:31.309 [2024-10-07 14:54:54.487852] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:31.309 [2024-10-07 14:54:54.487862] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:31.309 [2024-10-07 14:54:54.487873] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:44:31.309 [2024-10-07 14:54:54.490068] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:44:31.309 [2024-10-07 14:54:54.490120] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:44:31.309 [2024-10-07 14:54:54.490244] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:44:31.309 [2024-10-07 14:54:54.490271] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:44:31.309 [2024-10-07 14:54:54.490732] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:31.309 14:54:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:31.309 14:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:31.570 [2024-10-07 14:54:55.142442] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:44:31.570 [2024-10-07 14:54:55.142541] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:44:31.570 [2024-10-07 14:54:55.143985] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:44:31.570 [2024-10-07 14:54:55.144088] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:44:31.570 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:31.570 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:31.570 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:31.570 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:31.570 [2024-10-07 14:54:55.154999] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:31.570 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:31.570 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:44:31.570 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:31.570 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:31.570 Malloc0 00:44:31.570 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:31.570 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:44:31.570 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:31.570 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:31.570 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:31.570 14:54:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:31.570 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:31.570 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:31.570 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:31.570 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:31.570 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:31.570 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:31.830 [2024-10-07 14:54:55.279215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:31.830 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:31.830 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3361489 00:44:31.830 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3361491 00:44:31.830 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:44:31.830 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:44:31.830 14:54:55 
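The `rpc_cmd` calls traced from `bdev_io_wait.sh@18`–`@25` amount to the following RPC sequence against the target started with `--wait-for-rpc`. This is an illustrative dry run (each call is echoed, since executing it needs a live `/var/tmp/spdk.sock`); the `scripts/rpc.py` path is the usual in-tree helper, and the option values are taken from the log. The small bdev-pool settings in `bdev_set_options` are what make I/O submissions hit the out-of-memory path this test exercises via `spdk_bdev_queue_io_wait`.

```shell
# Dry-run sketch of the RPC sequence bdev_io_wait.sh issues against the target.
RPC="echo scripts/rpc.py"                  # drop the echo to run against a live target

$RPC bdev_set_options -p 5 -c 1            # tiny buf pool: forces the io_wait retry path
$RPC framework_start_init                  # finish init deferred by --wait-for-rpc
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0  # 64 MiB ramdisk, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

After the last call the target logs "NVMe/TCP Target Listening on 10.0.0.2 port 4420", matching the `tcp.c:1081` notice above, and the four bdevperf instances can attach.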
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:44:31.830 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:44:31.830 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:44:31.830 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:44:31.830 { 00:44:31.830 "params": { 00:44:31.830 "name": "Nvme$subsystem", 00:44:31.830 "trtype": "$TEST_TRANSPORT", 00:44:31.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:31.830 "adrfam": "ipv4", 00:44:31.830 "trsvcid": "$NVMF_PORT", 00:44:31.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:31.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:31.830 "hdgst": ${hdgst:-false}, 00:44:31.830 "ddgst": ${ddgst:-false} 00:44:31.830 }, 00:44:31.830 "method": "bdev_nvme_attach_controller" 00:44:31.830 } 00:44:31.830 EOF 00:44:31.830 )") 00:44:31.830 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3361493 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:44:31.831 14:54:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3361496 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:44:31.831 { 00:44:31.831 "params": { 00:44:31.831 "name": "Nvme$subsystem", 00:44:31.831 "trtype": "$TEST_TRANSPORT", 00:44:31.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:31.831 "adrfam": "ipv4", 00:44:31.831 "trsvcid": "$NVMF_PORT", 00:44:31.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:31.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:31.831 "hdgst": ${hdgst:-false}, 00:44:31.831 "ddgst": ${ddgst:-false} 00:44:31.831 }, 00:44:31.831 "method": "bdev_nvme_attach_controller" 00:44:31.831 } 00:44:31.831 EOF 00:44:31.831 )") 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:44:31.831 { 00:44:31.831 "params": { 00:44:31.831 "name": 
"Nvme$subsystem", 00:44:31.831 "trtype": "$TEST_TRANSPORT", 00:44:31.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:31.831 "adrfam": "ipv4", 00:44:31.831 "trsvcid": "$NVMF_PORT", 00:44:31.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:31.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:31.831 "hdgst": ${hdgst:-false}, 00:44:31.831 "ddgst": ${ddgst:-false} 00:44:31.831 }, 00:44:31.831 "method": "bdev_nvme_attach_controller" 00:44:31.831 } 00:44:31.831 EOF 00:44:31.831 )") 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:44:31.831 { 00:44:31.831 "params": { 00:44:31.831 "name": "Nvme$subsystem", 00:44:31.831 "trtype": "$TEST_TRANSPORT", 00:44:31.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:31.831 "adrfam": "ipv4", 00:44:31.831 "trsvcid": "$NVMF_PORT", 00:44:31.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:31.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:31.831 "hdgst": ${hdgst:-false}, 00:44:31.831 "ddgst": ${ddgst:-false} 00:44:31.831 }, 00:44:31.831 "method": 
"bdev_nvme_attach_controller" 00:44:31.831 } 00:44:31.831 EOF 00:44:31.831 )") 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3361489 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:44:31.831 "params": { 00:44:31.831 "name": "Nvme1", 00:44:31.831 "trtype": "tcp", 00:44:31.831 "traddr": "10.0.0.2", 00:44:31.831 "adrfam": "ipv4", 00:44:31.831 "trsvcid": "4420", 00:44:31.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:31.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:31.831 "hdgst": false, 00:44:31.831 "ddgst": false 00:44:31.831 }, 00:44:31.831 "method": "bdev_nvme_attach_controller" 00:44:31.831 }' 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:44:31.831 "params": { 00:44:31.831 "name": "Nvme1", 00:44:31.831 "trtype": "tcp", 00:44:31.831 "traddr": "10.0.0.2", 00:44:31.831 "adrfam": "ipv4", 00:44:31.831 "trsvcid": "4420", 00:44:31.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:31.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:31.831 "hdgst": false, 00:44:31.831 "ddgst": false 00:44:31.831 }, 00:44:31.831 "method": "bdev_nvme_attach_controller" 00:44:31.831 }' 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:44:31.831 "params": { 00:44:31.831 "name": "Nvme1", 00:44:31.831 "trtype": "tcp", 00:44:31.831 "traddr": "10.0.0.2", 00:44:31.831 "adrfam": "ipv4", 00:44:31.831 "trsvcid": "4420", 00:44:31.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:31.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:31.831 "hdgst": false, 00:44:31.831 "ddgst": false 00:44:31.831 }, 00:44:31.831 "method": "bdev_nvme_attach_controller" 00:44:31.831 }' 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:44:31.831 14:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:44:31.831 "params": { 00:44:31.831 "name": "Nvme1", 00:44:31.831 "trtype": "tcp", 00:44:31.831 "traddr": "10.0.0.2", 00:44:31.831 "adrfam": "ipv4", 00:44:31.831 "trsvcid": "4420", 00:44:31.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:31.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:31.831 "hdgst": false, 00:44:31.831 "ddgst": false 00:44:31.831 }, 00:44:31.831 "method": "bdev_nvme_attach_controller" 
00:44:31.831 }' 00:44:31.831 [2024-10-07 14:54:55.355203] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:44:31.831 [2024-10-07 14:54:55.355207] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:44:31.831 [2024-10-07 14:54:55.355291] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:44:31.831 [2024-10-07 14:54:55.355292] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:44:31.831 [2024-10-07 14:54:55.366014] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:44:31.831 [2024-10-07 14:54:55.366117] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:44:31.831 [2024-10-07 14:54:55.366631] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:44:31.831 [2024-10-07 14:54:55.366725] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:44:31.831 [2024-10-07 14:54:55.534962] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:32.092 [2024-10-07 14:54:55.592954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:32.092 [2024-10-07 14:54:55.643747] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:32.092 [2024-10-07 14:54:55.694198] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:32.092 [2024-10-07 14:54:55.710041] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:44:32.092 [2024-10-07 14:54:55.770933] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:44:32.353 [2024-10-07 14:54:55.817234] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:44:32.353 [2024-10-07 14:54:55.869821] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:44:32.614 Running I/O for 1 seconds... 00:44:32.614 Running I/O for 1 seconds... 00:44:32.614 Running I/O for 1 seconds... 00:44:32.614 Running I/O for 1 seconds... 
00:44:33.556 6956.00 IOPS, 27.17 MiB/s 00:44:33.556 Latency(us) 00:44:33.556 [2024-10-07T12:54:57.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:33.557 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:44:33.557 Nvme1n1 : 1.02 6976.92 27.25 0.00 0.00 18190.79 2280.11 28835.84 00:44:33.557 [2024-10-07T12:54:57.266Z] =================================================================================================================== 00:44:33.557 [2024-10-07T12:54:57.266Z] Total : 6976.92 27.25 0.00 0.00 18190.79 2280.11 28835.84 00:44:33.557 6692.00 IOPS, 26.14 MiB/s [2024-10-07T12:54:57.266Z] 18624.00 IOPS, 72.75 MiB/s 00:44:33.557 Latency(us) 00:44:33.557 [2024-10-07T12:54:57.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:33.557 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:44:33.557 Nvme1n1 : 1.01 6779.32 26.48 0.00 0.00 18817.19 5051.73 35607.89 00:44:33.557 [2024-10-07T12:54:57.266Z] =================================================================================================================== 00:44:33.557 [2024-10-07T12:54:57.266Z] Total : 6779.32 26.48 0.00 0.00 18817.19 5051.73 35607.89 00:44:33.557 00:44:33.557 Latency(us) 00:44:33.557 [2024-10-07T12:54:57.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:33.557 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:44:33.557 Nvme1n1 : 1.01 18681.21 72.97 0.00 0.00 6832.46 2771.63 12561.07 00:44:33.557 [2024-10-07T12:54:57.266Z] =================================================================================================================== 00:44:33.557 [2024-10-07T12:54:57.266Z] Total : 18681.21 72.97 0.00 0.00 6832.46 2771.63 12561.07 00:44:33.818 174448.00 IOPS, 681.44 MiB/s 00:44:33.818 Latency(us) 00:44:33.818 [2024-10-07T12:54:57.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:33.818 
Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:44:33.818 Nvme1n1 : 1.00 174081.38 680.01 0.00 0.00 731.34 334.51 2061.65 00:44:33.818 [2024-10-07T12:54:57.527Z] =================================================================================================================== 00:44:33.818 [2024-10-07T12:54:57.527Z] Total : 174081.38 680.01 0.00 0.00 731.34 334.51 2061.65 00:44:34.390 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3361491 00:44:34.651 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3361493 00:44:34.651 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3361496 00:44:34.651 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:34.651 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:34.651 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:34.651 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:34.651 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:44:34.651 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:44:34.651 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:44:34.651 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:44:34.651 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:34.651 
14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:44:34.651 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:34.651 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:34.651 rmmod nvme_tcp 00:44:34.651 rmmod nvme_fabrics 00:44:34.651 rmmod nvme_keyring 00:44:34.651 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:34.651 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:44:34.651 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:44:34.651 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 3361198 ']' 00:44:34.652 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 3361198 00:44:34.652 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3361198 ']' 00:44:34.652 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3361198 00:44:34.652 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:44:34.652 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:34.652 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3361198 00:44:34.652 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:34.652 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:34.652 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3361198' 00:44:34.652 killing process with pid 3361198 00:44:34.652 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3361198 00:44:34.652 14:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3361198 00:44:35.594 14:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:44:35.594 14:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:44:35.594 14:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:44:35.594 14:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:44:35.594 14:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:44:35.594 14:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:44:35.594 14:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:44:35.594 14:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:35.594 14:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:35.594 14:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:35.594 14:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:35.594 
14:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:37.506 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:37.506 00:44:37.506 real 0m14.609s 00:44:37.506 user 0m24.221s 00:44:37.506 sys 0m7.979s 00:44:37.506 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:37.506 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:44:37.506 ************************************ 00:44:37.506 END TEST nvmf_bdev_io_wait 00:44:37.506 ************************************ 00:44:37.766 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:44:37.766 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:44:37.766 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:37.766 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:44:37.766 ************************************ 00:44:37.766 START TEST nvmf_queue_depth 00:44:37.766 ************************************ 00:44:37.767 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:44:37.767 * Looking for test storage... 
00:44:37.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:37.767 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:44:37.767 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:44:37.767 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:44:37.767 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:44:37.767 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:37.767 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:37.767 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:37.767 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:44:37.767 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:44:37.767 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:44:37.767 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:44:37.767 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:44:37.767 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:44:37.767 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:44:37.767 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:44:37.767 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:44:37.767 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:44:37.767 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:37.767 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:37.767 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:44:37.767 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:44:38.028 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:38.028 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:44:38.028 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:44:38.028 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:44:38.028 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:44:38.028 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:38.028 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:44:38.028 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:44:38.028 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:38.028 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:44:38.028 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:44:38.028 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:44:38.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.029 --rc genhtml_branch_coverage=1 00:44:38.029 --rc genhtml_function_coverage=1 00:44:38.029 --rc genhtml_legend=1 00:44:38.029 --rc geninfo_all_blocks=1 00:44:38.029 --rc geninfo_unexecuted_blocks=1 00:44:38.029 00:44:38.029 ' 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:44:38.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.029 --rc genhtml_branch_coverage=1 00:44:38.029 --rc genhtml_function_coverage=1 00:44:38.029 --rc genhtml_legend=1 00:44:38.029 --rc geninfo_all_blocks=1 00:44:38.029 --rc geninfo_unexecuted_blocks=1 00:44:38.029 00:44:38.029 ' 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:44:38.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.029 --rc genhtml_branch_coverage=1 00:44:38.029 --rc genhtml_function_coverage=1 00:44:38.029 --rc genhtml_legend=1 00:44:38.029 --rc geninfo_all_blocks=1 00:44:38.029 --rc geninfo_unexecuted_blocks=1 00:44:38.029 00:44:38.029 ' 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:44:38.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:38.029 --rc genhtml_branch_coverage=1 00:44:38.029 --rc genhtml_function_coverage=1 00:44:38.029 --rc genhtml_legend=1 00:44:38.029 --rc 
geninfo_all_blocks=1 00:44:38.029 --rc geninfo_unexecuted_blocks=1 00:44:38.029 00:44:38.029 ' 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.029 14:55:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:38.029 14:55:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:44:38.029 14:55:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:44:38.029 14:55:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:44:46.170 
14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:44:46.170 Found 0000:31:00.0 (0x8086 - 0x159b) 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:46.170 14:55:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:44:46.170 Found 0000:31:00.1 (0x8086 - 0x159b) 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 
)) 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:44:46.170 Found net devices under 0000:31:00.0: cvl_0_0 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:44:46.170 Found net devices under 0000:31:00.1: cvl_0_1 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:44:46.170 14:55:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:46.170 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:46.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:44:46.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:44:46.171 00:44:46.171 --- 10.0.0.2 ping statistics --- 00:44:46.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:46.171 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:46.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:46.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:44:46.171 00:44:46.171 --- 10.0.0.1 ping statistics --- 00:44:46.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:46.171 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:44:46.171 14:55:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:46.171 14:55:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:46.171 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=3366452 00:44:46.171 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 3366452 00:44:46.171 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:44:46.171 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3366452 ']' 00:44:46.171 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:46.171 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:46.171 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:46.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:44:46.171 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:46.171 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:46.171 [2024-10-07 14:55:09.099366] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:44:46.171 [2024-10-07 14:55:09.101998] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:44:46.171 [2024-10-07 14:55:09.102113] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:46.171 [2024-10-07 14:55:09.262900] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:46.171 [2024-10-07 14:55:09.488043] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:46.171 [2024-10-07 14:55:09.488113] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:46.171 [2024-10-07 14:55:09.488129] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:46.171 [2024-10-07 14:55:09.488141] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:46.171 [2024-10-07 14:55:09.488153] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:46.171 [2024-10-07 14:55:09.489608] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:44:46.171 [2024-10-07 14:55:09.761953] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:44:46.171 [2024-10-07 14:55:09.762321] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:44:46.171 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:46.171 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:44:46.171 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:44:46.171 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:46.171 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:46.432 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:46.432 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:46.432 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:46.432 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:46.432 [2024-10-07 14:55:09.914849] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:46.432 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:46.432 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:44:46.432 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:46.432 14:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:46.432 Malloc0 00:44:46.432 14:55:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:46.432 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:44:46.432 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:46.432 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:46.432 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:46.432 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:46.432 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:46.432 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:46.432 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:46.432 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:46.432 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:46.432 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:46.432 [2024-10-07 14:55:10.042766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:46.432 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:46.432 
14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3366604 00:44:46.432 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:44:46.432 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:46.432 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3366604 /var/tmp/bdevperf.sock 00:44:46.432 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3366604 ']' 00:44:46.432 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:44:46.432 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:46.432 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:44:46.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:44:46.432 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:46.432 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:46.432 [2024-10-07 14:55:10.125290] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:44:46.432 [2024-10-07 14:55:10.125377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3366604 ] 00:44:46.692 [2024-10-07 14:55:10.228076] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:46.951 [2024-10-07 14:55:10.405208] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:44:47.211 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:47.212 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:44:47.212 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:44:47.212 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:47.212 14:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:44:47.472 NVMe0n1 00:44:47.472 14:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:47.472 14:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:44:47.472 Running I/O for 10 seconds... 
00:44:49.795 8038.00 IOPS, 31.40 MiB/s [2024-10-07T12:55:14.457Z] 8157.00 IOPS, 31.86 MiB/s [2024-10-07T12:55:15.399Z] 8874.00 IOPS, 34.66 MiB/s [2024-10-07T12:55:16.338Z] 9336.25 IOPS, 36.47 MiB/s [2024-10-07T12:55:17.279Z] 9633.60 IOPS, 37.63 MiB/s [2024-10-07T12:55:18.218Z] 9892.50 IOPS, 38.64 MiB/s [2024-10-07T12:55:19.158Z] 10007.71 IOPS, 39.09 MiB/s [2024-10-07T12:55:20.539Z] 10119.38 IOPS, 39.53 MiB/s [2024-10-07T12:55:21.163Z] 10222.44 IOPS, 39.93 MiB/s [2024-10-07T12:55:21.436Z] 10274.60 IOPS, 40.14 MiB/s 00:44:57.727 Latency(us) 00:44:57.727 [2024-10-07T12:55:21.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:57.727 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:44:57.727 Verification LBA range: start 0x0 length 0x4000 00:44:57.727 NVMe0n1 : 10.05 10308.44 40.27 0.00 0.00 98956.00 6772.05 75147.95 00:44:57.727 [2024-10-07T12:55:21.436Z] =================================================================================================================== 00:44:57.727 [2024-10-07T12:55:21.436Z] Total : 10308.44 40.27 0.00 0.00 98956.00 6772.05 75147.95 00:44:57.727 { 00:44:57.727 "results": [ 00:44:57.727 { 00:44:57.727 "job": "NVMe0n1", 00:44:57.727 "core_mask": "0x1", 00:44:57.727 "workload": "verify", 00:44:57.727 "status": "finished", 00:44:57.727 "verify_range": { 00:44:57.727 "start": 0, 00:44:57.727 "length": 16384 00:44:57.727 }, 00:44:57.727 "queue_depth": 1024, 00:44:57.727 "io_size": 4096, 00:44:57.727 "runtime": 10.049731, 00:44:57.727 "iops": 10308.435121298271, 00:44:57.727 "mibps": 40.26732469257137, 00:44:57.727 "io_failed": 0, 00:44:57.727 "io_timeout": 0, 00:44:57.727 "avg_latency_us": 98955.99585830992, 00:44:57.727 "min_latency_us": 6772.053333333333, 00:44:57.727 "max_latency_us": 75147.94666666667 00:44:57.727 } 00:44:57.727 ], 00:44:57.727 "core_count": 1 00:44:57.727 } 00:44:57.727 14:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 3366604 00:44:57.727 14:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3366604 ']' 00:44:57.727 14:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3366604 00:44:57.727 14:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:44:57.727 14:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:57.727 14:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3366604 00:44:57.727 14:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:44:57.727 14:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:44:57.727 14:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3366604' 00:44:57.727 killing process with pid 3366604 00:44:57.727 14:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3366604 00:44:57.727 Received shutdown signal, test time was about 10.000000 seconds 00:44:57.727 00:44:57.727 Latency(us) 00:44:57.727 [2024-10-07T12:55:21.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:57.727 [2024-10-07T12:55:21.436Z] =================================================================================================================== 00:44:57.727 [2024-10-07T12:55:21.436Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:57.727 14:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3366604 00:44:58.324 14:55:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:44:58.324 14:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:44:58.324 14:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:44:58.324 14:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:44:58.324 14:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:58.324 14:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:44:58.324 14:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:58.324 14:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:58.324 rmmod nvme_tcp 00:44:58.324 rmmod nvme_fabrics 00:44:58.324 rmmod nvme_keyring 00:44:58.585 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:58.585 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:44:58.585 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:44:58.585 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 3366452 ']' 00:44:58.585 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 3366452 00:44:58.585 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3366452 ']' 00:44:58.585 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3366452 00:44:58.585 14:55:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:44:58.585 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:58.585 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3366452 00:44:58.585 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:44:58.585 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:44:58.585 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3366452' 00:44:58.585 killing process with pid 3366452 00:44:58.585 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3366452 00:44:58.585 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3366452 00:44:59.155 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:44:59.155 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:44:59.155 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:44:59.155 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:44:59.155 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:44:59.155 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:44:59.155 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 
00:44:59.415 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:59.415 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:59.415 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:59.415 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:44:59.415 14:55:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:01.327 14:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:01.327 00:45:01.327 real 0m23.641s 00:45:01.327 user 0m26.125s 00:45:01.327 sys 0m7.827s 00:45:01.327 14:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:01.327 14:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:45:01.327 ************************************ 00:45:01.327 END TEST nvmf_queue_depth 00:45:01.327 ************************************ 00:45:01.327 14:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:45:01.327 14:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:45:01.327 14:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:01.327 14:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:45:01.327 ************************************ 00:45:01.327 START 
TEST nvmf_target_multipath 00:45:01.327 ************************************ 00:45:01.327 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:45:01.587 * Looking for test storage... 00:45:01.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:45:01.587 14:55:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:45:01.587 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:01.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:01.588 --rc genhtml_branch_coverage=1 00:45:01.588 --rc genhtml_function_coverage=1 00:45:01.588 --rc genhtml_legend=1 00:45:01.588 --rc geninfo_all_blocks=1 00:45:01.588 --rc geninfo_unexecuted_blocks=1 00:45:01.588 00:45:01.588 ' 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:01.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:01.588 --rc genhtml_branch_coverage=1 00:45:01.588 --rc genhtml_function_coverage=1 00:45:01.588 --rc genhtml_legend=1 00:45:01.588 --rc geninfo_all_blocks=1 00:45:01.588 --rc geninfo_unexecuted_blocks=1 00:45:01.588 00:45:01.588 ' 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:45:01.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:01.588 --rc genhtml_branch_coverage=1 00:45:01.588 --rc genhtml_function_coverage=1 00:45:01.588 --rc genhtml_legend=1 00:45:01.588 --rc geninfo_all_blocks=1 00:45:01.588 --rc geninfo_unexecuted_blocks=1 00:45:01.588 00:45:01.588 ' 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:01.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:01.588 --rc genhtml_branch_coverage=1 00:45:01.588 --rc genhtml_function_coverage=1 00:45:01.588 --rc genhtml_legend=1 00:45:01.588 --rc geninfo_all_blocks=1 00:45:01.588 --rc geninfo_unexecuted_blocks=1 00:45:01.588 00:45:01.588 ' 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:01.588 14:55:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:01.588 14:55:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:45:01.588 14:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:45:09.721 14:55:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:45:09.721 Found 0000:31:00.0 (0x8086 - 0x159b) 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:45:09.721 Found 0000:31:00.1 (0x8086 - 0x159b) 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:45:09.721 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:45:09.722 Found net devices under 0000:31:00.0: cvl_0_0 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:09.722 14:55:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:45:09.722 Found net devices under 0000:31:00.1: cvl_0_1 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:09.722 14:55:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:09.722 14:55:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:09.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:09.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.513 ms 00:45:09.722 00:45:09.722 --- 10.0.0.2 ping statistics --- 00:45:09.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:09.722 rtt min/avg/max/mdev = 0.513/0.513/0.513/0.000 ms 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:09.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:09.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:45:09.722 00:45:09.722 --- 10.0.0.1 ping statistics --- 00:45:09.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:09.722 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:45:09.722 only one NIC for nvmf test 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:45:09.722 14:55:32 
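The `nvmf_tcp_init` sequence traced above (namespace creation, moving the target NIC, addressing both ends, opening TCP/4420, and a two-way ping check) can be sketched as a standalone script. This is an illustrative reconstruction from the log, not SPDK's `nvmf/common.sh` itself; the `cvl_0_0`/`cvl_0_1` names and 10.0.0.0/24 addresses come from the trace, and `run()` is a hypothetical dry-run wrapper so the sequence can be printed without root.

```shell
#!/usr/bin/env bash
# Illustrative reconstruction of the nvmf_tcp_init flow seen in the log:
# move the target NIC into a network namespace, address both ends, open
# TCP port 4420, and verify connectivity with ping in both directions.
# run() is a hypothetical helper: with DRY_RUN=1 it prints each command
# instead of executing it (real execution requires root and the NICs).

run() {
    if [[ "${DRY_RUN:-0}" == 1 ]]; then echo "$*"; else "$@"; fi
}

nvmf_tcp_init_sketch() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1   # names from the trace
    local ns=cvl_0_0_ns_spdk
    local target_ip=10.0.0.2 initiator_ip=10.0.0.1

    run ip -4 addr flush "$target_if"
    run ip -4 addr flush "$initiator_if"
    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"
    run ip addr add "$initiator_ip/24" dev "$initiator_if"
    run ip netns exec "$ns" ip addr add "$target_ip/24" dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    # Tag the firewall rule so later cleanup can filter on SPDK_NVMF
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:accept nvmf tcp'
    run ping -c 1 "$target_ip"
    run ip netns exec "$ns" ping -c 1 "$initiator_ip"
}

# Print the command sequence without touching the host:
DRY_RUN=1 nvmf_tcp_init_sketch
```

Because the target app is later launched via `ip netns exec "$NVMF_TARGET_NAMESPACE"` (the `NVMF_TARGET_NS_CMD` array in the trace), the target and initiator get isolated network stacks on a single host with two physical ports.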
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:09.722 rmmod nvme_tcp 00:45:09.722 rmmod nvme_fabrics 00:45:09.722 rmmod nvme_keyring 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:45:09.722 14:55:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:09.722 14:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:11.106 
14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:11.106 00:45:11.106 real 0m9.752s 00:45:11.106 user 0m2.128s 00:45:11.106 sys 0m5.529s 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:45:11.106 ************************************ 00:45:11.106 END TEST nvmf_target_multipath 00:45:11.106 ************************************ 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:11.106 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:45:11.367 ************************************ 00:45:11.367 START TEST nvmf_zcopy 00:45:11.367 ************************************ 00:45:11.367 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:45:11.367 * Looking for test storage... 
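The teardown traced above relies on the comment tag added at setup time: every rule SPDK inserts carries `-m comment --comment 'SPDK_NVMF:...'`, so the `iptr` cleanup is simply "save the ruleset, drop tagged lines, restore" (`iptables-save | grep -v SPDK_NVMF | iptables-restore` in the log). The filtering step can be demonstrated on its own; the sample ruleset below is invented for illustration, not taken from the log.

```shell
#!/usr/bin/env bash
# Demonstrates the cleanup idiom from the trace: rules tagged with an
# SPDK_NVMF comment at insert time can be removed wholesale by filtering
# the saved ruleset. On a real host the pipeline is:
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
# Here we run only the filter stage on a hand-written sample ruleset.

strip_spdk_rules() {
    grep -v SPDK_NVMF    # keep every rule except the SPDK-tagged ones
}

sample_ruleset='*filter
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:accept"
-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT
COMMIT'

printf '%s\n' "$sample_ruleset" | strip_spdk_rules
```

Tagging rules with a comment and filtering on it avoids having to remember each rule's exact arguments for a matching `iptables -D` at teardown.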
00:45:11.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:11.367 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:11.367 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:45:11.367 14:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:45:11.367 14:55:35 
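The `cmp_versions` trace above (checking whether `lcov --version` 1.15 is older than 2) splits both version strings on `.`, `-` and `:` into arrays and compares them component by component. A self-contained sketch of that logic, reconstructed from the trace rather than copied from `scripts/common.sh`:

```shell
#!/usr/bin/env bash
# Sketch of the dotted-version comparison traced in the log: split both
# versions on '.', '-' and ':' (the IFS=.-: seen in the trace), then walk
# the components numerically; the first differing component decides.
# Missing components are treated as 0, so 1.15 vs 2 compares (1,15) vs (2,0).

version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                      # equal versions are not "less than"
}

if version_lt 1.15 2; then
    echo "lcov 1.15 predates 2: use legacy --rc lcov_* option names"
fi
```

In the log this decision selects the pre-2.0 `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` option spelling for `LCOV_OPTS`.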
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:11.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:11.367 --rc genhtml_branch_coverage=1 00:45:11.367 --rc genhtml_function_coverage=1 00:45:11.367 --rc genhtml_legend=1 00:45:11.367 --rc geninfo_all_blocks=1 00:45:11.367 --rc geninfo_unexecuted_blocks=1 00:45:11.367 00:45:11.367 ' 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:11.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:11.367 --rc genhtml_branch_coverage=1 00:45:11.367 --rc genhtml_function_coverage=1 00:45:11.367 --rc genhtml_legend=1 00:45:11.367 --rc geninfo_all_blocks=1 00:45:11.367 --rc geninfo_unexecuted_blocks=1 00:45:11.367 00:45:11.367 ' 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:45:11.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:11.367 --rc genhtml_branch_coverage=1 00:45:11.367 --rc genhtml_function_coverage=1 00:45:11.367 --rc genhtml_legend=1 00:45:11.367 --rc geninfo_all_blocks=1 00:45:11.367 --rc geninfo_unexecuted_blocks=1 00:45:11.367 00:45:11.367 ' 00:45:11.367 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:11.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:11.368 --rc genhtml_branch_coverage=1 00:45:11.368 --rc genhtml_function_coverage=1 00:45:11.368 --rc genhtml_legend=1 00:45:11.368 --rc geninfo_all_blocks=1 00:45:11.368 --rc geninfo_unexecuted_blocks=1 00:45:11.368 00:45:11.368 ' 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:11.368 14:55:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:11.368 14:55:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:11.368 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:11.628 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:45:11.628 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:45:11.628 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:45:11.628 14:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:19.765 
14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:19.765 14:55:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:45:19.765 Found 0000:31:00.0 (0x8086 - 0x159b) 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:45:19.765 Found 0000:31:00.1 (0x8086 - 0x159b) 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:45:19.765 14:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:45:19.765 Found net devices under 0000:31:00.0: cvl_0_0 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@416 -- # [[ up == up ]] 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:45:19.765 Found net devices under 0000:31:00.1: cvl_0_1 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:19.765 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:19.766 14:55:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:45:19.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:45:19.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms
00:45:19.766
00:45:19.766 --- 10.0.0.2 ping statistics ---
00:45:19.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:45:19.766 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms
00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:45:19.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:45:19.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms
00:45:19.766
00:45:19.766 --- 10.0.0.1 ping statistics ---
00:45:19.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:45:19.766 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms
00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0
00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable
00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- #
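[Editor's note] The namespace topology the harness has just built and verified above (target NIC `cvl_0_0` moved into the `cvl_0_0_ns_spdk` netns at 10.0.0.2, initiator NIC `cvl_0_1` left in the root namespace at 10.0.0.1, cross-namespace pings both ways) can be sketched as follows. The commands and addresses are taken from the log; the `run` wrapper is a hypothetical dry-run helper added so the sketch executes without root or real `cvl_*` interfaces.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the NVMe/TCP test topology set up by nvmf_tcp_init
# in nvmf/common.sh. run() is a hypothetical helper: it only prints each
# command, so no root privileges or real NICs are needed.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # this interface is moved into the target netns
INITIATOR_IF=cvl_0_1     # this interface stays in the root namespace
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"                          # isolate target NIC
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                   # initiator address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # target address
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                            # root ns -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                        # target ns -> initiator
```

Moving only the target NIC into a namespace lets a single host act as both NVMe/TCP target and initiator over real hardware, which is why every target-side command later in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk`.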
nvmfpid=3377399 00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 3377399 00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3377399 ']' 00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:19.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:19.766 14:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:19.766 [2024-10-07 14:55:42.452642] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:45:19.766 [2024-10-07 14:55:42.455328] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:45:19.766 [2024-10-07 14:55:42.455426] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:19.766 [2024-10-07 14:55:42.613692] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:19.766 [2024-10-07 14:55:42.838259] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:19.766 [2024-10-07 14:55:42.838307] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:19.766 [2024-10-07 14:55:42.838321] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:19.766 [2024-10-07 14:55:42.838334] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:19.766 [2024-10-07 14:55:42.838345] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:19.766 [2024-10-07 14:55:42.839568] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:45:19.766 [2024-10-07 14:55:43.077205] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:45:19.766 [2024-10-07 14:55:43.077507] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
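[Editor's note] Once `nvmf_tgt` is up inside the namespace, the log shows it being configured over JSON-RPC via `rpc_cmd` (transport, subsystem, listeners, malloc bdev, namespace). A minimal sketch of that sequence, with the RPC names and flags copied verbatim from the `target/zcopy.sh` lines in the log; `rpc_cmd` here is a hypothetical dry-run stand-in that prints the `rpc.py` invocation rather than talking to `/var/tmp/spdk.sock`:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-side RPC setup from target/zcopy.sh.
# rpc_cmd() is a stand-in that echoes the rpc.py call it would make.
rpc_cmd() { echo "rpc.py $*"; }

# TCP transport; flags copied from the log (--zcopy enables zero-copy receive)
rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
# Subsystem: allow any host (-a), serial number (-s), up to 10 namespaces (-m)
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
# Listeners on the namespaced target IP: the subsystem plus the discovery service
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# A 32 MiB malloc bdev with 4096-byte blocks, exposed as namespace 1
rpc_cmd bdev_malloc_create 32 4096 -b malloc0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```

The repeated `Requested NSID 1 already in use` / `Unable to add namespace` errors later in the log are the expected negative path: the test keeps re-issuing `nvmf_subsystem_add_ns` for an NSID that is already attached while I/O is running.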
00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:19.766 [2024-10-07 14:55:43.256694] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:19.766 
14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:19.766 [2024-10-07 14:55:43.305096] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:19.766 malloc0 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:45:19.766 { 00:45:19.766 "params": { 00:45:19.766 "name": "Nvme$subsystem", 00:45:19.766 "trtype": "$TEST_TRANSPORT", 00:45:19.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:19.766 "adrfam": "ipv4", 00:45:19.766 "trsvcid": "$NVMF_PORT", 00:45:19.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:19.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:19.766 "hdgst": ${hdgst:-false}, 00:45:19.766 "ddgst": ${ddgst:-false} 00:45:19.766 }, 00:45:19.766 "method": "bdev_nvme_attach_controller" 00:45:19.766 } 00:45:19.766 EOF 00:45:19.766 )") 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:45:19.766 14:55:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:45:19.766 14:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:45:19.766 "params": { 00:45:19.766 "name": "Nvme1", 00:45:19.766 "trtype": "tcp", 00:45:19.766 "traddr": "10.0.0.2", 00:45:19.766 "adrfam": "ipv4", 00:45:19.766 "trsvcid": "4420", 00:45:19.766 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:19.766 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:19.766 "hdgst": false, 00:45:19.766 "ddgst": false 00:45:19.766 }, 00:45:19.766 "method": "bdev_nvme_attach_controller" 00:45:19.766 }' 00:45:20.026 [2024-10-07 14:55:43.476380] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:45:20.026 [2024-10-07 14:55:43.476485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3377592 ] 00:45:20.026 [2024-10-07 14:55:43.589978] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:20.286 [2024-10-07 14:55:43.767237] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:45:20.545 Running I/O for 10 seconds... 
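[Editor's note] The `config+=("$(cat <<-EOF ...)")` / `jq .` / `printf` lines above are `gen_nvmf_target_json` assembling the bdevperf controller config on the fly and feeding it through `/dev/fd/62`. A minimal, runnable re-creation of that heredoc pattern, using only values visible in the log (the loop bound of a single subsystem is an assumption for illustration):

```shell
#!/usr/bin/env bash
# Re-creation of the gen_nvmf_target_json pattern from nvmf/common.sh:
# one bdev_nvme_attach_controller stanza per subsystem is rendered via a
# heredoc, with ${hdgst:-false}/${ddgst:-false} supplying defaults when
# the digest variables are unset, then the stanzas are comma-joined.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1; do
  config+=("$(
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done

# Comma-join the stanzas, as the IFS=, / printf pair in the log does.
IFS=,
printf '%s\n' "${config[*]}"
```

In the real harness this output is piped through `jq .` and handed to bdevperf as `--json /dev/fd/62`, so no config file ever touches disk.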
00:45:22.427 5841.00 IOPS, 45.63 MiB/s
[2024-10-07T12:55:47.520Z] 5888.50 IOPS, 46.00 MiB/s
[2024-10-07T12:55:48.461Z] 5901.00 IOPS, 46.10 MiB/s
[2024-10-07T12:55:49.405Z] 5909.25 IOPS, 46.17 MiB/s
[2024-10-07T12:55:50.347Z] 6273.60 IOPS, 49.01 MiB/s
[2024-10-07T12:55:51.290Z] 6646.17 IOPS, 51.92 MiB/s
[2024-10-07T12:55:52.233Z] 6920.14 IOPS, 54.06 MiB/s
[2024-10-07T12:55:53.177Z] 7124.75 IOPS, 55.66 MiB/s
[2024-10-07T12:55:54.562Z] 7285.56 IOPS, 56.92 MiB/s
[2024-10-07T12:55:54.562Z] 7411.80 IOPS, 57.90 MiB/s
00:45:30.853 Latency(us)
[2024-10-07T12:55:54.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:45:30.853 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:45:30.853 Verification LBA range: start 0x0 length 0x1000
00:45:30.853 Nvme1n1 : 10.01 7416.28 57.94 0.00 0.00 17202.49 1508.69 30801.92
[2024-10-07T12:55:54.562Z] ===================================================================================================================
[2024-10-07T12:55:54.562Z] Total : 7416.28 57.94 0.00 0.00 17202.49 1508.69 30801.92
00:45:31.424 14:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3379759
00:45:31.424 14:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:45:31.424 14:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:45:31.424 14:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:45:31.424 14:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:45:31.424 14:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=()
00:45:31.424 14:55:54
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:45:31.424 14:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:45:31.424 14:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:45:31.424 { 00:45:31.424 "params": { 00:45:31.424 "name": "Nvme$subsystem", 00:45:31.424 "trtype": "$TEST_TRANSPORT", 00:45:31.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:31.424 "adrfam": "ipv4", 00:45:31.424 "trsvcid": "$NVMF_PORT", 00:45:31.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:31.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:31.424 "hdgst": ${hdgst:-false}, 00:45:31.424 "ddgst": ${ddgst:-false} 00:45:31.424 }, 00:45:31.424 "method": "bdev_nvme_attach_controller" 00:45:31.424 } 00:45:31.424 EOF 00:45:31.424 )") 00:45:31.424 14:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:45:31.424 [2024-10-07 14:55:54.888196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:31.424 [2024-10-07 14:55:54.888235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:31.424 14:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:45:31.424 14:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:45:31.424 14:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:45:31.424 "params": { 00:45:31.424 "name": "Nvme1", 00:45:31.424 "trtype": "tcp", 00:45:31.424 "traddr": "10.0.0.2", 00:45:31.424 "adrfam": "ipv4", 00:45:31.424 "trsvcid": "4420", 00:45:31.424 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:31.424 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:31.424 "hdgst": false, 00:45:31.424 "ddgst": false 00:45:31.424 }, 00:45:31.424 "method": "bdev_nvme_attach_controller" 00:45:31.424 }' 00:45:31.424 [2024-10-07 14:55:54.900132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:31.424 [2024-10-07 14:55:54.900151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:31.424 [2024-10-07 14:55:54.912143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:31.424 [2024-10-07 14:55:54.912159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:31.424 [2024-10-07 14:55:54.924137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:31.424 [2024-10-07 14:55:54.924153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:31.424 [2024-10-07 14:55:54.936119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:31.424 [2024-10-07 14:55:54.936134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:31.424 [2024-10-07 14:55:54.948134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:31.424 [2024-10-07 14:55:54.948150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:31.424 [2024-10-07 14:55:54.956680] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:45:31.424 [2024-10-07 14:55:54.956776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3379759 ] 00:45:31.424 [2024-10-07 14:55:54.960133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:31.424 [2024-10-07 14:55:54.960148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:31.424 [2024-10-07 14:55:54.972132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:31.424 [2024-10-07 14:55:54.972148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:31.424 [2024-10-07 14:55:54.984131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:31.424 [2024-10-07 14:55:54.984146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:31.424 [2024-10-07 14:55:54.996128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:31.424 [2024-10-07 14:55:54.996142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:31.424 [2024-10-07 14:55:55.008135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:31.424 [2024-10-07 14:55:55.008151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:31.424 [2024-10-07 14:55:55.020133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:31.424 [2024-10-07 14:55:55.020148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:31.424 [2024-10-07 14:55:55.032118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:31.424 [2024-10-07 14:55:55.032133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:45:31.424 [2024-10-07 14:55:55.044132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:31.424 [2024-10-07 14:55:55.044147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:31.424 [2024-10-07 14:55:55.056129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:31.424 [2024-10-07 14:55:55.056144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:31.424 [2024-10-07 14:55:55.068115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:31.424 [2024-10-07 14:55:55.068130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:31.424 [2024-10-07 14:55:55.070320] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:31.424 [2024-10-07 14:55:55.080130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:31.424 [2024-10-07 14:55:55.080146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:31.424 [2024-10-07 14:55:55.092120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:31.424 [2024-10-07 14:55:55.092135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:31.425 [2024-10-07 14:55:55.104133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:31.425 [2024-10-07 14:55:55.104149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:31.425 [2024-10-07 14:55:55.116140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:31.425 [2024-10-07 14:55:55.116156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:31.425 [2024-10-07 14:55:55.128119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:31.425 [2024-10-07 14:55:55.128134] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:45:31.686 [2024-10-07 14:55:55.140129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:45:31.686 [2024-10-07 14:55:55.140144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the error pair above repeats continuously from 14:55:55.140 through 14:55:57.416; only distinct entries are shown below]
00:45:31.686 [2024-10-07 14:55:55.248253] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:45:31.948 Running I/O for 5 seconds...
00:45:32.994 16732.00 IOPS, 130.72 MiB/s [2024-10-07T12:55:56.703Z]
00:45:33.777 [2024-10-07 14:55:57.416925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:45:33.777 [2024-10-07 14:55:57.416944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:45:33.777 [2024-10-07 14:55:57.431868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:33.777 [2024-10-07 14:55:57.431887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:33.777 [2024-10-07 14:55:57.445332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:33.777 [2024-10-07 14:55:57.445350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:33.777 [2024-10-07 14:55:57.459603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:33.777 [2024-10-07 14:55:57.459621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:33.777 [2024-10-07 14:55:57.473016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:33.777 [2024-10-07 14:55:57.473040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.038 [2024-10-07 14:55:57.488342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.038 [2024-10-07 14:55:57.488360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.038 [2024-10-07 14:55:57.504055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.038 [2024-10-07 14:55:57.504074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.038 [2024-10-07 14:55:57.514829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.038 [2024-10-07 14:55:57.514849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.038 [2024-10-07 14:55:57.529285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.038 [2024-10-07 14:55:57.529304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.038 [2024-10-07 14:55:57.544159] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.038 [2024-10-07 14:55:57.544178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.038 [2024-10-07 14:55:57.557571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.038 [2024-10-07 14:55:57.557591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.038 [2024-10-07 14:55:57.572006] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.038 [2024-10-07 14:55:57.572026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.038 [2024-10-07 14:55:57.584328] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.038 [2024-10-07 14:55:57.584346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.038 [2024-10-07 14:55:57.599606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.038 [2024-10-07 14:55:57.599625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.038 [2024-10-07 14:55:57.612917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.038 [2024-10-07 14:55:57.612936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.038 16795.00 IOPS, 131.21 MiB/s [2024-10-07T12:55:57.747Z] [2024-10-07 14:55:57.628545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.038 [2024-10-07 14:55:57.628564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.038 [2024-10-07 14:55:57.644092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.038 [2024-10-07 14:55:57.644112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.038 [2024-10-07 14:55:57.656042] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.038 [2024-10-07 14:55:57.656061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.038 [2024-10-07 14:55:57.668900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.038 [2024-10-07 14:55:57.668919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.038 [2024-10-07 14:55:57.684064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.038 [2024-10-07 14:55:57.684083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.038 [2024-10-07 14:55:57.696991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.038 [2024-10-07 14:55:57.697016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.038 [2024-10-07 14:55:57.711776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.038 [2024-10-07 14:55:57.711795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.038 [2024-10-07 14:55:57.725086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.038 [2024-10-07 14:55:57.725105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.038 [2024-10-07 14:55:57.740810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.038 [2024-10-07 14:55:57.740833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.300 [2024-10-07 14:55:57.756301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.300 [2024-10-07 14:55:57.756321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.300 [2024-10-07 14:55:57.771775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:45:34.300 [2024-10-07 14:55:57.771794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.300 [2024-10-07 14:55:57.785716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.300 [2024-10-07 14:55:57.785734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.300 [2024-10-07 14:55:57.799823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.300 [2024-10-07 14:55:57.799842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.300 [2024-10-07 14:55:57.811421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.300 [2024-10-07 14:55:57.811441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.300 [2024-10-07 14:55:57.825007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.300 [2024-10-07 14:55:57.825027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.300 [2024-10-07 14:55:57.840318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.300 [2024-10-07 14:55:57.840338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.300 [2024-10-07 14:55:57.856176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.300 [2024-10-07 14:55:57.856195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.300 [2024-10-07 14:55:57.868791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.300 [2024-10-07 14:55:57.868810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.300 [2024-10-07 14:55:57.884439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.300 
[2024-10-07 14:55:57.884458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.300 [2024-10-07 14:55:57.900244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.300 [2024-10-07 14:55:57.900264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.300 [2024-10-07 14:55:57.912614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.300 [2024-10-07 14:55:57.912632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.300 [2024-10-07 14:55:57.928401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.300 [2024-10-07 14:55:57.928420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.300 [2024-10-07 14:55:57.943971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.300 [2024-10-07 14:55:57.943991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.300 [2024-10-07 14:55:57.955071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.300 [2024-10-07 14:55:57.955090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.300 [2024-10-07 14:55:57.969973] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.300 [2024-10-07 14:55:57.969992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.300 [2024-10-07 14:55:57.983976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.300 [2024-10-07 14:55:57.983995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.300 [2024-10-07 14:55:57.995921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.300 [2024-10-07 14:55:57.995941] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.561 [2024-10-07 14:55:58.009597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.561 [2024-10-07 14:55:58.009622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.561 [2024-10-07 14:55:58.024098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.561 [2024-10-07 14:55:58.024118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.561 [2024-10-07 14:55:58.036743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.561 [2024-10-07 14:55:58.036763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.561 [2024-10-07 14:55:58.051417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.561 [2024-10-07 14:55:58.051436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.561 [2024-10-07 14:55:58.064480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.562 [2024-10-07 14:55:58.064499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.562 [2024-10-07 14:55:58.080246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.562 [2024-10-07 14:55:58.080266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.562 [2024-10-07 14:55:58.091783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.562 [2024-10-07 14:55:58.091802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.562 [2024-10-07 14:55:58.105125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.562 [2024-10-07 14:55:58.105144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:45:34.562 [2024-10-07 14:55:58.120853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.562 [2024-10-07 14:55:58.120872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.562 [2024-10-07 14:55:58.135778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.562 [2024-10-07 14:55:58.135797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.562 [2024-10-07 14:55:58.146913] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.562 [2024-10-07 14:55:58.146931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.562 [2024-10-07 14:55:58.161357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.562 [2024-10-07 14:55:58.161376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.562 [2024-10-07 14:55:58.175637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.562 [2024-10-07 14:55:58.175656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.562 [2024-10-07 14:55:58.188653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.562 [2024-10-07 14:55:58.188672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.562 [2024-10-07 14:55:58.204459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.562 [2024-10-07 14:55:58.204477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.562 [2024-10-07 14:55:58.219502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.562 [2024-10-07 14:55:58.219521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.562 [2024-10-07 14:55:58.232454] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.562 [2024-10-07 14:55:58.232472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.562 [2024-10-07 14:55:58.247994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.562 [2024-10-07 14:55:58.248019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.562 [2024-10-07 14:55:58.260518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.562 [2024-10-07 14:55:58.260536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.822 [2024-10-07 14:55:58.276106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.822 [2024-10-07 14:55:58.276125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.822 [2024-10-07 14:55:58.288788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.822 [2024-10-07 14:55:58.288807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.822 [2024-10-07 14:55:58.304405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.822 [2024-10-07 14:55:58.304424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.822 [2024-10-07 14:55:58.320083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.822 [2024-10-07 14:55:58.320102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.822 [2024-10-07 14:55:58.330807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.822 [2024-10-07 14:55:58.330825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.822 [2024-10-07 14:55:58.345608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:45:34.822 [2024-10-07 14:55:58.345627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.822 [2024-10-07 14:55:58.359661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.822 [2024-10-07 14:55:58.359679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.822 [2024-10-07 14:55:58.371667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.822 [2024-10-07 14:55:58.371686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.822 [2024-10-07 14:55:58.386158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.822 [2024-10-07 14:55:58.386177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.822 [2024-10-07 14:55:58.400128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.822 [2024-10-07 14:55:58.400146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.822 [2024-10-07 14:55:58.412027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.822 [2024-10-07 14:55:58.412046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.822 [2024-10-07 14:55:58.425264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.822 [2024-10-07 14:55:58.425283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.822 [2024-10-07 14:55:58.439535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.822 [2024-10-07 14:55:58.439553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.822 [2024-10-07 14:55:58.452790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.822 
[2024-10-07 14:55:58.452809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.822 [2024-10-07 14:55:58.467889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.822 [2024-10-07 14:55:58.467907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.822 [2024-10-07 14:55:58.479568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.822 [2024-10-07 14:55:58.479586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.822 [2024-10-07 14:55:58.492710] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.822 [2024-10-07 14:55:58.492729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.822 [2024-10-07 14:55:58.508344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.822 [2024-10-07 14:55:58.508361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:34.822 [2024-10-07 14:55:58.523899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:34.822 [2024-10-07 14:55:58.523919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.083 [2024-10-07 14:55:58.536215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.083 [2024-10-07 14:55:58.536234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.083 [2024-10-07 14:55:58.549297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.083 [2024-10-07 14:55:58.549316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.083 [2024-10-07 14:55:58.564354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.083 [2024-10-07 14:55:58.564372] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.083 [2024-10-07 14:55:58.576218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.083 [2024-10-07 14:55:58.576237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.083 [2024-10-07 14:55:58.589283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.083 [2024-10-07 14:55:58.589302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.083 [2024-10-07 14:55:58.604061] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.083 [2024-10-07 14:55:58.604080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.083 [2024-10-07 14:55:58.615760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.083 [2024-10-07 14:55:58.615779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.083 16809.67 IOPS, 131.33 MiB/s [2024-10-07T12:55:58.792Z] [2024-10-07 14:55:58.629621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.083 [2024-10-07 14:55:58.629640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.083 [2024-10-07 14:55:58.643972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.083 [2024-10-07 14:55:58.643991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.083 [2024-10-07 14:55:58.656320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.083 [2024-10-07 14:55:58.656339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.083 [2024-10-07 14:55:58.672146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.083 [2024-10-07 14:55:58.672165] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.083 [2024-10-07 14:55:58.684313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.083 [2024-10-07 14:55:58.684332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.083 [2024-10-07 14:55:58.696740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.083 [2024-10-07 14:55:58.696759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.083 [2024-10-07 14:55:58.712172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.083 [2024-10-07 14:55:58.712190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.083 [2024-10-07 14:55:58.723464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.083 [2024-10-07 14:55:58.723484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.083 [2024-10-07 14:55:58.737553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.083 [2024-10-07 14:55:58.737572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.083 [2024-10-07 14:55:58.752204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.083 [2024-10-07 14:55:58.752222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.083 [2024-10-07 14:55:58.762885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.083 [2024-10-07 14:55:58.762903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.083 [2024-10-07 14:55:58.777025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.083 [2024-10-07 14:55:58.777048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:45:35.344 [2024-10-07 14:55:58.792335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.344 [2024-10-07 14:55:58.792354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.345 [2024-10-07 14:55:58.805925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.345 [2024-10-07 14:55:58.805945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.345 [2024-10-07 14:55:58.820045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.345 [2024-10-07 14:55:58.820064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.345 [2024-10-07 14:55:58.830532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.345 [2024-10-07 14:55:58.830550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.345 [2024-10-07 14:55:58.845423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.345 [2024-10-07 14:55:58.845442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.345 [2024-10-07 14:55:58.860196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.345 [2024-10-07 14:55:58.860215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.345 [2024-10-07 14:55:58.873621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.345 [2024-10-07 14:55:58.873639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.345 [2024-10-07 14:55:58.888355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.345 [2024-10-07 14:55:58.888374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.345 [2024-10-07 14:55:58.900100] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.345 [2024-10-07 14:55:58.900118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.345 [2024-10-07 14:55:58.913236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.345 [2024-10-07 14:55:58.913256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.345 [2024-10-07 14:55:58.927563] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.345 [2024-10-07 14:55:58.927582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.345 [2024-10-07 14:55:58.941330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.345 [2024-10-07 14:55:58.941349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.345 [2024-10-07 14:55:58.957113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.345 [2024-10-07 14:55:58.957132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.345 [2024-10-07 14:55:58.971994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.345 [2024-10-07 14:55:58.972019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.345 [2024-10-07 14:55:58.985475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.345 [2024-10-07 14:55:58.985493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.345 [2024-10-07 14:55:58.999188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.345 [2024-10-07 14:55:58.999206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.345 [2024-10-07 14:55:59.012233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:45:35.345 [2024-10-07 14:55:59.012250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.345 [2024-10-07 14:55:59.027470] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.345 [2024-10-07 14:55:59.027488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.345 [2024-10-07 14:55:59.040255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.345 [2024-10-07 14:55:59.040278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.345 [2024-10-07 14:55:59.051732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.345 [2024-10-07 14:55:59.051751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.606 [2024-10-07 14:55:59.065370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.606 [2024-10-07 14:55:59.065390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.606 [2024-10-07 14:55:59.080094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.606 [2024-10-07 14:55:59.080113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.606 [2024-10-07 14:55:59.090939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.606 [2024-10-07 14:55:59.090957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.606 [2024-10-07 14:55:59.105782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.606 [2024-10-07 14:55:59.105800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.606 [2024-10-07 14:55:59.119941] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.606 
[2024-10-07 14:55:59.119959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.607 [2024-10-07 14:55:59.131044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.607 [2024-10-07 14:55:59.131063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.607 [2024-10-07 14:55:59.145740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.607 [2024-10-07 14:55:59.145760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.607 [2024-10-07 14:55:59.159791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.607 [2024-10-07 14:55:59.159811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.607 [2024-10-07 14:55:59.173766] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.607 [2024-10-07 14:55:59.173785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.607 [2024-10-07 14:55:59.187957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.607 [2024-10-07 14:55:59.187976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.607 [2024-10-07 14:55:59.201318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.607 [2024-10-07 14:55:59.201336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.607 [2024-10-07 14:55:59.215781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.607 [2024-10-07 14:55:59.215800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.607 [2024-10-07 14:55:59.226873] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.607 [2024-10-07 14:55:59.226892] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.607 [2024-10-07 14:55:59.241556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.607 [2024-10-07 14:55:59.241574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.607 [2024-10-07 14:55:59.255916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.607 [2024-10-07 14:55:59.255934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.607 [2024-10-07 14:55:59.268100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.607 [2024-10-07 14:55:59.268119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.607 [2024-10-07 14:55:59.281326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.607 [2024-10-07 14:55:59.281344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.607 [2024-10-07 14:55:59.295963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.607 [2024-10-07 14:55:59.295986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.607 [2024-10-07 14:55:59.306832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.607 [2024-10-07 14:55:59.306850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.869 [2024-10-07 14:55:59.321597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.869 [2024-10-07 14:55:59.321616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.869 [2024-10-07 14:55:59.335606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.869 [2024-10-07 14:55:59.335625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:45:35.869 [2024-10-07 14:55:59.349527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.869 [2024-10-07 14:55:59.349546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.869 [2024-10-07 14:55:59.364063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.869 [2024-10-07 14:55:59.364083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.869 [2024-10-07 14:55:59.374969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.869 [2024-10-07 14:55:59.374988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.869 [2024-10-07 14:55:59.389428] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.869 [2024-10-07 14:55:59.389447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.869 [2024-10-07 14:55:59.403884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.869 [2024-10-07 14:55:59.403904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.869 [2024-10-07 14:55:59.414997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.869 [2024-10-07 14:55:59.415023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.869 [2024-10-07 14:55:59.428849] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.869 [2024-10-07 14:55:59.428868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.869 [2024-10-07 14:55:59.444349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.869 [2024-10-07 14:55:59.444368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.869 [2024-10-07 14:55:59.455302] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.869 [2024-10-07 14:55:59.455323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.869 [2024-10-07 14:55:59.469211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.869 [2024-10-07 14:55:59.469231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.869 [2024-10-07 14:55:59.483388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.869 [2024-10-07 14:55:59.483407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.869 [2024-10-07 14:55:59.497430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.869 [2024-10-07 14:55:59.497449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.869 [2024-10-07 14:55:59.511869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.869 [2024-10-07 14:55:59.511888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.869 [2024-10-07 14:55:59.523272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.869 [2024-10-07 14:55:59.523291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.869 [2024-10-07 14:55:59.537142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.869 [2024-10-07 14:55:59.537160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.869 [2024-10-07 14:55:59.551239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:35.869 [2024-10-07 14:55:59.551262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:35.869 [2024-10-07 14:55:59.565320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:45:35.869 [2024-10-07 14:55:59.565340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.130 [2024-10-07 14:55:59.579805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.130 [2024-10-07 14:55:59.579824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.130 [2024-10-07 14:55:59.593675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.130 [2024-10-07 14:55:59.593694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.130 [2024-10-07 14:55:59.608240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.130 [2024-10-07 14:55:59.608259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.130 16822.75 IOPS, 131.43 MiB/s [2024-10-07T12:55:59.839Z] [2024-10-07 14:55:59.620636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.130 [2024-10-07 14:55:59.620654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.130 [2024-10-07 14:55:59.635711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.130 [2024-10-07 14:55:59.635730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.130 [2024-10-07 14:55:59.650021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.130 [2024-10-07 14:55:59.650040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.130 [2024-10-07 14:55:59.664232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.130 [2024-10-07 14:55:59.664251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.130 [2024-10-07 14:55:59.676345] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:45:36.130 [2024-10-07 14:55:59.676365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.130 [2024-10-07 14:55:59.692084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.130 [2024-10-07 14:55:59.692104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.130 [2024-10-07 14:55:59.704975] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.130 [2024-10-07 14:55:59.704995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.130 [2024-10-07 14:55:59.719488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.130 [2024-10-07 14:55:59.719507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.130 [2024-10-07 14:55:59.732162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.130 [2024-10-07 14:55:59.732183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.130 [2024-10-07 14:55:59.745606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.130 [2024-10-07 14:55:59.745625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.130 [2024-10-07 14:55:59.759958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.130 [2024-10-07 14:55:59.759978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.130 [2024-10-07 14:55:59.772916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.130 [2024-10-07 14:55:59.772935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.130 [2024-10-07 14:55:59.788077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.130 
[2024-10-07 14:55:59.788097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.130 [2024-10-07 14:55:59.799765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.130 [2024-10-07 14:55:59.799783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.130 [2024-10-07 14:55:59.813789] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.130 [2024-10-07 14:55:59.813808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.130 [2024-10-07 14:55:59.828153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.130 [2024-10-07 14:55:59.828172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.391 [2024-10-07 14:55:59.840399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.391 [2024-10-07 14:55:59.840418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.391 [2024-10-07 14:55:59.856167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.391 [2024-10-07 14:55:59.856186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.391 [2024-10-07 14:55:59.868183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.391 [2024-10-07 14:55:59.868202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.392 [2024-10-07 14:55:59.880184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.392 [2024-10-07 14:55:59.880203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.392 [2024-10-07 14:55:59.892875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.392 [2024-10-07 14:55:59.892894] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.392 [2024-10-07 14:55:59.907723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.392 [2024-10-07 14:55:59.907742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.392 [2024-10-07 14:55:59.921256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.392 [2024-10-07 14:55:59.921274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.392 [2024-10-07 14:55:59.935580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.392 [2024-10-07 14:55:59.935599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.392 [2024-10-07 14:55:59.949237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.392 [2024-10-07 14:55:59.949256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.392 [2024-10-07 14:55:59.963931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.392 [2024-10-07 14:55:59.963951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.392 [2024-10-07 14:55:59.974449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.392 [2024-10-07 14:55:59.974468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.392 [2024-10-07 14:55:59.988940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.392 [2024-10-07 14:55:59.988959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.392 [2024-10-07 14:56:00.005393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.392 [2024-10-07 14:56:00.005414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:45:36.392 [2024-10-07 14:56:00.020261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.392 [2024-10-07 14:56:00.020285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.392 [2024-10-07 14:56:00.031988] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.392 [2024-10-07 14:56:00.032014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.392 [2024-10-07 14:56:00.045088] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.392 [2024-10-07 14:56:00.045109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.392 [2024-10-07 14:56:00.059563] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.392 [2024-10-07 14:56:00.059584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.392 [2024-10-07 14:56:00.072925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.392 [2024-10-07 14:56:00.072945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.392 [2024-10-07 14:56:00.088501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.392 [2024-10-07 14:56:00.088523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.653 [2024-10-07 14:56:00.103687] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.653 [2024-10-07 14:56:00.103708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.653 [2024-10-07 14:56:00.117655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.653 [2024-10-07 14:56:00.117674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.653 [2024-10-07 14:56:00.131908] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.653 [2024-10-07 14:56:00.131928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.653 [2024-10-07 14:56:00.143235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.653 [2024-10-07 14:56:00.143253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.653 [2024-10-07 14:56:00.157985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.653 [2024-10-07 14:56:00.158010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.653 [2024-10-07 14:56:00.173045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.653 [2024-10-07 14:56:00.173064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.653 [2024-10-07 14:56:00.188114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.653 [2024-10-07 14:56:00.188133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.653 [2024-10-07 14:56:00.199449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.653 [2024-10-07 14:56:00.199467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.653 [2024-10-07 14:56:00.213200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.653 [2024-10-07 14:56:00.213218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.653 [2024-10-07 14:56:00.228514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.653 [2024-10-07 14:56:00.228533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.653 [2024-10-07 14:56:00.244344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:45:36.653 [2024-10-07 14:56:00.244363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.653 [2024-10-07 14:56:00.256212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.653 [2024-10-07 14:56:00.256231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.653 [2024-10-07 14:56:00.269383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.653 [2024-10-07 14:56:00.269401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.653 [2024-10-07 14:56:00.284529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.653 [2024-10-07 14:56:00.284546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.653 [2024-10-07 14:56:00.300241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.653 [2024-10-07 14:56:00.300260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.653 [2024-10-07 14:56:00.313785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.653 [2024-10-07 14:56:00.313804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.653 [2024-10-07 14:56:00.328441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.653 [2024-10-07 14:56:00.328460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.653 [2024-10-07 14:56:00.343998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.653 [2024-10-07 14:56:00.344023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.653 [2024-10-07 14:56:00.356443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.653 
[2024-10-07 14:56:00.356461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.914 [2024-10-07 14:56:00.372431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.914 [2024-10-07 14:56:00.372449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.914 [2024-10-07 14:56:00.388170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.914 [2024-10-07 14:56:00.388189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.914 [2024-10-07 14:56:00.401031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.914 [2024-10-07 14:56:00.401050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.914 [2024-10-07 14:56:00.415819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.914 [2024-10-07 14:56:00.415837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.914 [2024-10-07 14:56:00.429753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.914 [2024-10-07 14:56:00.429772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.914 [2024-10-07 14:56:00.443855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.914 [2024-10-07 14:56:00.443875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.914 [2024-10-07 14:56:00.456405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.914 [2024-10-07 14:56:00.456423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.914 [2024-10-07 14:56:00.471703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.914 [2024-10-07 14:56:00.471722] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.914 [2024-10-07 14:56:00.484867] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.914 [2024-10-07 14:56:00.484886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.914 [2024-10-07 14:56:00.499861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.914 [2024-10-07 14:56:00.499880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.914 [2024-10-07 14:56:00.512964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.914 [2024-10-07 14:56:00.512983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.914 [2024-10-07 14:56:00.527508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.914 [2024-10-07 14:56:00.527526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.914 [2024-10-07 14:56:00.541316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.914 [2024-10-07 14:56:00.541335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.914 [2024-10-07 14:56:00.555989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.914 [2024-10-07 14:56:00.556015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.914 [2024-10-07 14:56:00.567242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.914 [2024-10-07 14:56:00.567260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:36.914 [2024-10-07 14:56:00.581795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:36.914 [2024-10-07 14:56:00.581814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:45:36.914 [2024-10-07 14:56:00.596145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:45:36.914 [2024-10-07 14:56:00.596168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:45:36.914 [2024-10-07 14:56:00.609129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:45:36.914 [2024-10-07 14:56:00.609147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:45:37.176 16820.00 IOPS, 131.41 MiB/s [2024-10-07T12:56:00.885Z]
[2024-10-07 14:56:00.623220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:45:37.176 [2024-10-07 14:56:00.623239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:45:37.176
00:45:37.176 Latency(us)
00:45:37.176 [2024-10-07T12:56:00.885Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:45:37.176 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:45:37.176 Nvme1n1                     :       5.01    16822.51     131.43       0.00     0.00    7600.85    3153.92   12997.97
00:45:37.176 [2024-10-07T12:56:00.885Z] ===================================================================================================================
00:45:37.176 [2024-10-07T12:56:00.885Z] Total                       :              16822.51     131.43       0.00     0.00    7600.85    3153.92   12997.97
00:45:37.176 [2024-10-07 14:56:00.632123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:45:37.176 [2024-10-07 14:56:00.632140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:45:37.176 [2024-10-07 14:56:00.644140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:45:37.176 [2024-10-07 14:56:00.644156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:45:37.176 [2024-10-07 14:56:00.656117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:45:37.176 [2024-10-07 14:56:00.656132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:37.176 [2024-10-07 14:56:00.668163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:37.176 [2024-10-07 14:56:00.668183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:37.176 [2024-10-07 14:56:00.680131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:37.176 [2024-10-07 14:56:00.680145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:37.176 [2024-10-07 14:56:00.692120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:37.176 [2024-10-07 14:56:00.692134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:37.176 [2024-10-07 14:56:00.704131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:37.176 [2024-10-07 14:56:00.704146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:37.176 [2024-10-07 14:56:00.716131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:37.176 [2024-10-07 14:56:00.716145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:37.176 [2024-10-07 14:56:00.728123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:37.176 [2024-10-07 14:56:00.728139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:37.176 [2024-10-07 14:56:00.740141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:37.176 [2024-10-07 14:56:00.740156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:37.176 [2024-10-07 14:56:00.752120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:37.176 [2024-10-07 14:56:00.752135] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:37.176 [2024-10-07 14:56:00.764134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:37.176 [2024-10-07 14:56:00.764149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:37.176 [2024-10-07 14:56:00.776140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:37.176 [2024-10-07 14:56:00.776155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:37.176 [2024-10-07 14:56:00.788129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:37.176 [2024-10-07 14:56:00.788147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:37.176 [2024-10-07 14:56:00.800128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:37.176 [2024-10-07 14:56:00.800143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:37.176 [2024-10-07 14:56:00.812131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:37.176 [2024-10-07 14:56:00.812147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:37.176 [2024-10-07 14:56:00.824115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:37.176 [2024-10-07 14:56:00.824130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:37.176 [2024-10-07 14:56:00.836137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:37.176 [2024-10-07 14:56:00.836153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:37.176 [2024-10-07 14:56:00.848167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:37.176 [2024-10-07 14:56:00.848182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:45:37.176 [2024-10-07 14:56:00.860130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:37.176 [2024-10-07 14:56:00.860145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:37.176 [... same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeated every ~12 ms from 14:56:00.872 through 14:56:01.328 ...] 00:45:37.698 [2024-10-07 14:56:01.340132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:45:37.698 [2024-10-07 14:56:01.340148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:45:37.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3379759) - No such process 00:45:37.698 14:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3379759 00:45:37.698 14:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:45:37.698 14:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:37.698 14:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:37.698 14:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:37.698 14:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:45:37.698 14:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:37.698 14:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:37.698 delay0 00:45:37.698 14:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:37.698 14:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:45:37.698 14:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:37.698 14:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:37.698 14:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:37.698 14:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:45:37.958 [2024-10-07 14:56:01.551213] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:45:46.091 Initializing NVMe Controllers 00:45:46.091 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:45:46.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:45:46.091 Initialization complete. Launching workers. 00:45:46.091 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 217, failed: 34975 00:45:46.091 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 35038, failed to submit 154 00:45:46.091 success 34975, unsuccessful 63, failed 0 00:45:46.091 14:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:45:46.091 14:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:45:46.091 14:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:45:46.091 14:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:45:46.091 14:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:46.091 14:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:45:46.091 14:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:46.091 14:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:46.091 rmmod nvme_tcp 00:45:46.091 rmmod nvme_fabrics 00:45:46.091 rmmod nvme_keyring 00:45:46.091 
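The zcopy sequence traced above (free the stuck NSID 1, wrap malloc0 in a delay bdev, re-attach it as NSID 1, then drive the abort example against it) can be sketched as a plain shell script. This is a minimal sketch, not the test itself: paths, the NQN, the RPC names and flags, and the transport string are taken from this log, and a running SPDK nvmf target with an existing `malloc0` bdev is assumed.

```shell
#!/usr/bin/env bash
# Sketch of the RPC sequence from the zcopy trace above (assumes a live
# SPDK nvmf target; paths and NQN copied from this log).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NQN=nqn.2016-06.io.spdk:cnode1

reattach_delayed_ns() {
    # Free NSID 1 -- the earlier add-ns retries failed with
    # "Requested NSID 1 already in use" while it was still attached.
    "$SPDK/scripts/rpc.py" nvmf_subsystem_remove_ns "$NQN" 1
    # Wrap malloc0 in a delay bdev adding ~1 s of latency per I/O class
    # (-r/-t avg/p99 read, -w/-n avg/p99 write, in microseconds).
    "$SPDK/scripts/rpc.py" bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # Re-attach the delayed bdev as NSID 1 of the same subsystem.
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns "$NQN" delay0 -n 1
}

# Only run against a live target; the abort example then has 5 s to
# queue I/O on the now-slow namespace and abort it.
if [ -x "$SPDK/scripts/rpc.py" ]; then
    reattach_delayed_ns
    "$SPDK/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
fi
```

The delay bdev is what makes the abort test meaningful: with ~1 s per I/O, most of the 64-deep queue is still outstanding when the abort commands arrive, which is why the summary above shows ~35k successful aborts.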
14:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:46.091 14:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:45:46.091 14:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:45:46.091 14:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 3377399 ']' 00:45:46.091 14:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 3377399 00:45:46.091 14:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 3377399 ']' 00:45:46.091 14:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3377399 00:45:46.091 14:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:45:46.091 14:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:46.091 14:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3377399 00:45:46.091 14:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:45:46.091 14:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:45:46.091 14:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3377399' 00:45:46.091 killing process with pid 3377399 00:45:46.091 14:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3377399 00:45:46.091 14:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3377399 00:45:46.091 14:56:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:45:46.091 14:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:45:46.091 14:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:45:46.091 14:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:45:46.091 14:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:45:46.091 14:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:45:46.091 14:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:45:46.091 14:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:46.091 14:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:46.091 14:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:46.091 14:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:46.091 14:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:48.004 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:48.004 00:45:48.004 real 0m36.825s 00:45:48.004 user 0m48.685s 00:45:48.004 sys 0m12.705s 00:45:48.004 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:48.004 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:45:48.004 ************************************ 
00:45:48.004 END TEST nvmf_zcopy 00:45:48.004 ************************************ 00:45:48.004 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:45:48.004 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:45:48.004 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:45:48.004 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:45:48.266 ************************************ 00:45:48.266 START TEST nvmf_nmic 00:45:48.266 ************************************ 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:45:48.266 * Looking for test storage... 
00:45:48.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- 
# case "$op" in 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:45:48.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:48.266 --rc genhtml_branch_coverage=1 00:45:48.266 --rc genhtml_function_coverage=1 00:45:48.266 --rc genhtml_legend=1 00:45:48.266 --rc geninfo_all_blocks=1 00:45:48.266 --rc geninfo_unexecuted_blocks=1 00:45:48.266 00:45:48.266 ' 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:45:48.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:48.266 --rc genhtml_branch_coverage=1 00:45:48.266 --rc genhtml_function_coverage=1 00:45:48.266 --rc genhtml_legend=1 00:45:48.266 --rc geninfo_all_blocks=1 00:45:48.266 --rc geninfo_unexecuted_blocks=1 00:45:48.266 00:45:48.266 ' 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:45:48.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:48.266 --rc genhtml_branch_coverage=1 00:45:48.266 --rc genhtml_function_coverage=1 00:45:48.266 --rc genhtml_legend=1 00:45:48.266 --rc geninfo_all_blocks=1 00:45:48.266 --rc geninfo_unexecuted_blocks=1 00:45:48.266 00:45:48.266 ' 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:45:48.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:48.266 --rc genhtml_branch_coverage=1 00:45:48.266 --rc genhtml_function_coverage=1 00:45:48.266 --rc genhtml_legend=1 00:45:48.266 --rc geninfo_all_blocks=1 00:45:48.266 --rc geninfo_unexecuted_blocks=1 00:45:48.266 00:45:48.266 ' 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:48.266 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:48.267 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:48.267 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:48.267 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:48.267 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:45:48.267 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:48.267 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:45:48.267 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:48.267 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:48.267 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:48.267 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:48.267 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:48.267 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:45:48.267 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:45:48.267 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:48.267 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:48.267 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:48.267 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:45:48.267 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:45:48.267 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:45:48.528 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:45:48.528 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:48.528 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:45:48.528 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:45:48.528 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:45:48.528 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:48.528 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:45:48.528 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:48.528 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:45:48.528 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:45:48.528 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:45:48.528 14:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@315 -- # pci_devs=() 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:55.111 14:56:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:45:55.111 Found 0000:31:00.0 (0x8086 - 0x159b) 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:45:55.111 Found 0000:31:00.1 (0x8086 - 0x159b) 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:45:55.111 Found net devices under 0000:31:00.0: cvl_0_0 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:45:55.111 Found net devices under 0000:31:00.1: cvl_0_1 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:45:55.111 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:45:55.112 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:45:55.112 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:45:55.112 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:55.112 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:55.112 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:55.112 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:55.112 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:55.112 14:56:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:55.112 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:55.112 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:55.112 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:55.112 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:55.112 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:55.112 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:55.112 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:55.112 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:55.372 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:55.372 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:55.372 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:55.372 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:55.372 14:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:55.372 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:45:55.372 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:55.372 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:55.372 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:55.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:55.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:45:55.372 00:45:55.372 --- 10.0.0.2 ping statistics --- 00:45:55.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:55.372 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:45:55.372 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:55.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:55.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:45:55.634 00:45:55.634 --- 10.0.0.1 ping statistics --- 00:45:55.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:55.634 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:45:55.634 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:55.634 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:45:55.634 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:45:55.634 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:55.634 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:45:55.634 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:45:55.635 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:55.635 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:45:55.635 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:45:55.635 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:45:55.635 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:45:55.635 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:55.635 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:55.635 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=3386507 
00:45:55.635 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 3386507 00:45:55.635 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:45:55.635 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3386507 ']' 00:45:55.635 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:55.635 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:55.635 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:55.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:55.635 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:55.635 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:55.635 [2024-10-07 14:56:19.235296] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:45:55.635 [2024-10-07 14:56:19.237691] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:45:55.635 [2024-10-07 14:56:19.237774] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:55.896 [2024-10-07 14:56:19.370783] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:55.896 [2024-10-07 14:56:19.553694] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:55.896 [2024-10-07 14:56:19.553741] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:55.896 [2024-10-07 14:56:19.553754] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:55.896 [2024-10-07 14:56:19.553764] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:55.896 [2024-10-07 14:56:19.553775] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:55.896 [2024-10-07 14:56:19.556046] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:45:55.896 [2024-10-07 14:56:19.556129] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:45:55.896 [2024-10-07 14:56:19.556381] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:45:55.896 [2024-10-07 14:56:19.556405] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:45:56.158 [2024-10-07 14:56:19.809408] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:45:56.158 [2024-10-07 14:56:19.809521] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:45:56.158 [2024-10-07 14:56:19.810739] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:45:56.158 [2024-10-07 14:56:19.810899] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:45:56.158 [2024-10-07 14:56:19.811032] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:45:56.419 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:56.419 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:45:56.419 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:45:56.419 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:56.419 14:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:56.419 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:56.419 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:56.419 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:56.419 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:56.419 [2024-10-07 14:56:20.029223] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:56.419 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:56.419 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:45:56.419 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:45:56.419 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:56.419 Malloc0 00:45:56.419 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:56.419 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:45:56.419 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:56.419 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:56.419 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:56.419 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:45:56.419 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:56.419 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:56.419 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:56.419 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:56.419 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:56.419 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:56.679 [2024-10-07 14:56:20.129440] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:56.679 14:56:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:56.679 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:45:56.679 test case1: single bdev can't be used in multiple subsystems 00:45:56.679 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:45:56.679 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:56.679 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:56.679 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:56.679 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:45:56.679 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:56.679 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:56.679 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:56.679 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:45:56.679 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:45:56.679 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:56.679 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:56.679 [2024-10-07 14:56:20.165109] 
bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:45:56.679 [2024-10-07 14:56:20.165143] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:45:56.679 [2024-10-07 14:56:20.165156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:45:56.679 request: 00:45:56.679 { 00:45:56.679 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:45:56.679 "namespace": { 00:45:56.679 "bdev_name": "Malloc0", 00:45:56.679 "no_auto_visible": false 00:45:56.679 }, 00:45:56.679 "method": "nvmf_subsystem_add_ns", 00:45:56.679 "req_id": 1 00:45:56.679 } 00:45:56.679 Got JSON-RPC error response 00:45:56.679 response: 00:45:56.679 { 00:45:56.679 "code": -32602, 00:45:56.680 "message": "Invalid parameters" 00:45:56.680 } 00:45:56.680 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:45:56.680 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:45:56.680 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:45:56.680 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:45:56.680 Adding namespace failed - expected result. 
00:45:56.680 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:45:56.680 test case2: host connect to nvmf target in multiple paths 00:45:56.680 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:45:56.680 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:56.680 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:45:56.680 [2024-10-07 14:56:20.177269] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:45:56.680 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:56.680 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:45:57.252 14:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:45:57.513 14:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:45:57.513 14:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:45:57.513 14:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:45:57.513 14:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:45:57.513 14:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:46:00.059 14:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:46:00.059 14:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:46:00.059 14:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:46:00.059 14:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:46:00.059 14:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:46:00.059 14:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:46:00.059 14:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:46:00.059 [global] 00:46:00.059 thread=1 00:46:00.059 invalidate=1 00:46:00.059 rw=write 00:46:00.059 time_based=1 00:46:00.059 runtime=1 00:46:00.059 ioengine=libaio 00:46:00.059 direct=1 00:46:00.059 bs=4096 00:46:00.059 iodepth=1 00:46:00.059 norandommap=0 00:46:00.059 numjobs=1 00:46:00.059 00:46:00.059 verify_dump=1 00:46:00.059 verify_backlog=512 00:46:00.059 verify_state_save=0 00:46:00.059 do_verify=1 00:46:00.059 verify=crc32c-intel 00:46:00.059 [job0] 00:46:00.059 filename=/dev/nvme0n1 00:46:00.059 Could not set queue depth (nvme0n1) 00:46:00.059 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:00.059 fio-3.35 00:46:00.059 Starting 1 thread 00:46:01.002 00:46:01.002 job0: (groupid=0, jobs=1): err= 0: pid=3387615: Mon Oct 7 
14:56:24 2024 00:46:01.002 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:46:01.002 slat (nsec): min=24373, max=56843, avg=25271.20, stdev=3154.90 00:46:01.002 clat (usec): min=761, max=1330, avg=1160.90, stdev=100.88 00:46:01.002 lat (usec): min=785, max=1361, avg=1186.17, stdev=101.09 00:46:01.002 clat percentiles (usec): 00:46:01.002 | 1.00th=[ 873], 5.00th=[ 922], 10.00th=[ 1004], 20.00th=[ 1106], 00:46:01.002 | 30.00th=[ 1156], 40.00th=[ 1172], 50.00th=[ 1188], 60.00th=[ 1205], 00:46:01.002 | 70.00th=[ 1221], 80.00th=[ 1237], 90.00th=[ 1254], 95.00th=[ 1270], 00:46:01.002 | 99.00th=[ 1303], 99.50th=[ 1303], 99.90th=[ 1336], 99.95th=[ 1336], 00:46:01.002 | 99.99th=[ 1336] 00:46:01.002 write: IOPS=540, BW=2162KiB/s (2214kB/s)(2164KiB/1001msec); 0 zone resets 00:46:01.002 slat (nsec): min=9663, max=63618, avg=29003.35, stdev=8911.98 00:46:01.002 clat (usec): min=189, max=984, avg=681.75, stdev=107.31 00:46:01.002 lat (usec): min=200, max=995, avg=710.76, stdev=110.74 00:46:01.002 clat percentiles (usec): 00:46:01.002 | 1.00th=[ 392], 5.00th=[ 482], 10.00th=[ 529], 20.00th=[ 603], 00:46:01.002 | 30.00th=[ 644], 40.00th=[ 676], 50.00th=[ 693], 60.00th=[ 725], 00:46:01.002 | 70.00th=[ 758], 80.00th=[ 775], 90.00th=[ 791], 95.00th=[ 816], 00:46:01.002 | 99.00th=[ 865], 99.50th=[ 881], 99.90th=[ 988], 99.95th=[ 988], 00:46:01.002 | 99.99th=[ 988] 00:46:01.002 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:46:01.002 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:46:01.002 lat (usec) : 250=0.09%, 500=3.23%, 750=31.91%, 1000=20.99% 00:46:01.002 lat (msec) : 2=43.78% 00:46:01.002 cpu : usr=1.20%, sys=3.30%, ctx=1053, majf=0, minf=1 00:46:01.002 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:01.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:01.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:01.002 
issued rwts: total=512,541,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:01.002 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:01.002 00:46:01.002 Run status group 0 (all jobs): 00:46:01.002 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:46:01.002 WRITE: bw=2162KiB/s (2214kB/s), 2162KiB/s-2162KiB/s (2214kB/s-2214kB/s), io=2164KiB (2216kB), run=1001-1001msec 00:46:01.002 00:46:01.002 Disk stats (read/write): 00:46:01.002 nvme0n1: ios=495/512, merge=0/0, ticks=536/342, in_queue=878, util=92.38% 00:46:01.263 14:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:46:01.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:46:01.524 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:46:01.524 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:46:01.524 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:46:01.524 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:01.524 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:46:01.524 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:01.524 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:46:01.524 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:46:01.524 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:46:01.524 14:56:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:46:01.524 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:46:01.524 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:01.524 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:46:01.524 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:01.524 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:01.524 rmmod nvme_tcp 00:46:01.524 rmmod nvme_fabrics 00:46:01.785 rmmod nvme_keyring 00:46:01.785 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:01.785 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:46:01.785 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:46:01.785 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 3386507 ']' 00:46:01.785 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 3386507 00:46:01.785 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3386507 ']' 00:46:01.785 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3386507 00:46:01.785 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:46:01.785 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:01.785 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3386507 
00:46:01.785 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:46:01.785 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:01.785 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3386507' 00:46:01.785 killing process with pid 3386507 00:46:01.785 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3386507 00:46:01.785 14:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 3386507 00:46:02.725 14:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:46:02.725 14:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:46:02.725 14:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:46:02.725 14:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:46:02.726 14:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:46:02.726 14:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:46:02.726 14:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:46:02.726 14:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:02.726 14:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:02.726 14:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:02.726 14:56:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:02.726 14:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:05.272 00:46:05.272 real 0m16.691s 00:46:05.272 user 0m37.503s 00:46:05.272 sys 0m7.501s 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:46:05.272 ************************************ 00:46:05.272 END TEST nvmf_nmic 00:46:05.272 ************************************ 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:46:05.272 ************************************ 00:46:05.272 START TEST nvmf_fio_target 00:46:05.272 ************************************ 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:46:05.272 * Looking for test storage... 
00:46:05.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:05.272 
14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:46:05.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:05.272 --rc genhtml_branch_coverage=1 00:46:05.272 --rc genhtml_function_coverage=1 00:46:05.272 --rc genhtml_legend=1 00:46:05.272 --rc geninfo_all_blocks=1 00:46:05.272 --rc geninfo_unexecuted_blocks=1 00:46:05.272 00:46:05.272 ' 00:46:05.272 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:46:05.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:05.272 --rc genhtml_branch_coverage=1 00:46:05.272 --rc genhtml_function_coverage=1 00:46:05.272 --rc genhtml_legend=1 00:46:05.272 --rc geninfo_all_blocks=1 00:46:05.273 --rc geninfo_unexecuted_blocks=1 00:46:05.273 00:46:05.273 ' 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:46:05.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:05.273 --rc genhtml_branch_coverage=1 00:46:05.273 --rc genhtml_function_coverage=1 00:46:05.273 --rc genhtml_legend=1 00:46:05.273 --rc geninfo_all_blocks=1 00:46:05.273 --rc geninfo_unexecuted_blocks=1 00:46:05.273 00:46:05.273 ' 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:46:05.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:05.273 --rc genhtml_branch_coverage=1 00:46:05.273 --rc genhtml_function_coverage=1 00:46:05.273 --rc genhtml_legend=1 00:46:05.273 --rc geninfo_all_blocks=1 
00:46:05.273 --rc geninfo_unexecuted_blocks=1 00:46:05.273 00:46:05.273 ' 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:46:05.273 
14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:05.273 14:56:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:05.273 
14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:46:05.273 14:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:46:05.273 14:56:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:46:13.417 14:56:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:13.417 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:46:13.418 Found 0000:31:00.0 (0x8086 - 0x159b) 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:46:13.418 Found 0000:31:00.1 (0x8086 - 0x159b) 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:13.418 
14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:46:13.418 Found net 
devices under 0000:31:00.0: cvl_0_0 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:46:13.418 Found net devices under 0000:31:00.1: cvl_0_1 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:46:13.418 14:56:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:46:13.418 14:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:46:13.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:13.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:46:13.418 00:46:13.418 --- 10.0.0.2 ping statistics --- 00:46:13.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:13.418 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:13.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:46:13.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:46:13.418 00:46:13.418 --- 10.0.0.1 ping statistics --- 00:46:13.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:13.418 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:46:13.418 14:56:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=3392241 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 3392241 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3392241 ']' 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:13.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:13.418 14:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:46:13.418 [2024-10-07 14:56:36.344780] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:46:13.418 [2024-10-07 14:56:36.347149] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:46:13.418 [2024-10-07 14:56:36.347236] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:13.418 [2024-10-07 14:56:36.472231] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:13.418 [2024-10-07 14:56:36.654988] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:13.418 [2024-10-07 14:56:36.655042] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:13.418 [2024-10-07 14:56:36.655056] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:13.418 [2024-10-07 14:56:36.655066] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:13.419 [2024-10-07 14:56:36.655079] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:13.419 [2024-10-07 14:56:36.657585] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:46:13.419 [2024-10-07 14:56:36.657666] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:46:13.419 [2024-10-07 14:56:36.657781] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:46:13.419 [2024-10-07 14:56:36.657807] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:46:13.419 [2024-10-07 14:56:36.917340] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:46:13.419 [2024-10-07 14:56:36.917484] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:46:13.419 [2024-10-07 14:56:36.918708] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:46:13.419 [2024-10-07 14:56:36.918857] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:46:13.419 [2024-10-07 14:56:36.918993] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:46:13.419 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:13.419 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:46:13.419 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:46:13.419 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:13.419 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:46:13.679 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:13.679 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:46:13.679 [2024-10-07 14:56:37.310636] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:13.679 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:46:13.939 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:46:13.939 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:46:14.200 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:46:14.200 14:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:46:14.461 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:46:14.461 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:46:14.722 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:46:14.722 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:46:14.981 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:46:15.241 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:46:15.241 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:46:15.501 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:46:15.501 14:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:46:15.501 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:46:15.501 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:46:15.761 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:46:16.022 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:46:16.022 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:46:16.022 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:46:16.022 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:46:16.283 14:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:16.544 [2024-10-07 14:56:40.022782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:16.544 14:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:46:16.544 14:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:46:16.804 14:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:46:17.374 14:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:46:17.374 14:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:46:17.374 14:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:46:17.374 14:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:46:17.374 14:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:46:17.374 14:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:46:19.283 14:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:46:19.283 14:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:46:19.283 14:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:46:19.283 14:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:46:19.283 14:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:46:19.283 14:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1208 -- # return 0 00:46:19.283 14:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:46:19.283 [global] 00:46:19.283 thread=1 00:46:19.283 invalidate=1 00:46:19.283 rw=write 00:46:19.283 time_based=1 00:46:19.283 runtime=1 00:46:19.283 ioengine=libaio 00:46:19.283 direct=1 00:46:19.283 bs=4096 00:46:19.283 iodepth=1 00:46:19.283 norandommap=0 00:46:19.283 numjobs=1 00:46:19.283 00:46:19.283 verify_dump=1 00:46:19.283 verify_backlog=512 00:46:19.283 verify_state_save=0 00:46:19.283 do_verify=1 00:46:19.283 verify=crc32c-intel 00:46:19.283 [job0] 00:46:19.283 filename=/dev/nvme0n1 00:46:19.283 [job1] 00:46:19.283 filename=/dev/nvme0n2 00:46:19.283 [job2] 00:46:19.283 filename=/dev/nvme0n3 00:46:19.283 [job3] 00:46:19.283 filename=/dev/nvme0n4 00:46:19.283 Could not set queue depth (nvme0n1) 00:46:19.283 Could not set queue depth (nvme0n2) 00:46:19.283 Could not set queue depth (nvme0n3) 00:46:19.283 Could not set queue depth (nvme0n4) 00:46:19.851 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:19.851 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:19.851 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:19.851 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:19.851 fio-3.35 00:46:19.851 Starting 4 threads 00:46:20.793 00:46:20.793 job0: (groupid=0, jobs=1): err= 0: pid=3393720: Mon Oct 7 14:56:44 2024 00:46:20.793 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:46:20.793 slat (nsec): min=23996, max=52338, avg=25260.04, stdev=2947.33 00:46:20.793 clat (usec): min=754, max=1404, avg=1015.87, stdev=82.52 00:46:20.793 lat (usec): min=779, 
max=1429, avg=1041.13, stdev=82.39 00:46:20.793 clat percentiles (usec): 00:46:20.793 | 1.00th=[ 791], 5.00th=[ 848], 10.00th=[ 906], 20.00th=[ 963], 00:46:20.793 | 30.00th=[ 996], 40.00th=[ 1012], 50.00th=[ 1029], 60.00th=[ 1045], 00:46:20.793 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1090], 95.00th=[ 1123], 00:46:20.793 | 99.00th=[ 1188], 99.50th=[ 1270], 99.90th=[ 1401], 99.95th=[ 1401], 00:46:20.793 | 99.99th=[ 1401] 00:46:20.793 write: IOPS=727, BW=2909KiB/s (2979kB/s)(2912KiB/1001msec); 0 zone resets 00:46:20.793 slat (nsec): min=9531, max=63989, avg=27529.70, stdev=9749.29 00:46:20.793 clat (usec): min=232, max=1036, avg=595.93, stdev=116.31 00:46:20.793 lat (usec): min=261, max=1088, avg=623.46, stdev=121.71 00:46:20.793 clat percentiles (usec): 00:46:20.793 | 1.00th=[ 281], 5.00th=[ 367], 10.00th=[ 412], 20.00th=[ 498], 00:46:20.793 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 644], 00:46:20.794 | 70.00th=[ 668], 80.00th=[ 693], 90.00th=[ 725], 95.00th=[ 750], 00:46:20.794 | 99.00th=[ 799], 99.50th=[ 840], 99.90th=[ 1037], 99.95th=[ 1037], 00:46:20.794 | 99.99th=[ 1037] 00:46:20.794 bw ( KiB/s): min= 4096, max= 4096, per=46.45%, avg=4096.00, stdev= 0.00, samples=1 00:46:20.794 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:46:20.794 lat (usec) : 250=0.08%, 500=11.69%, 750=44.27%, 1000=16.77% 00:46:20.794 lat (msec) : 2=27.18% 00:46:20.794 cpu : usr=1.60%, sys=3.70%, ctx=1245, majf=0, minf=1 00:46:20.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:20.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:20.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:20.794 issued rwts: total=512,728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:20.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:20.794 job1: (groupid=0, jobs=1): err= 0: pid=3393723: Mon Oct 7 14:56:44 2024 00:46:20.794 read: IOPS=28, BW=116KiB/s 
(119kB/s)(116KiB/1001msec) 00:46:20.794 slat (nsec): min=8532, max=29887, avg=26275.48, stdev=3489.75 00:46:20.794 clat (usec): min=849, max=41847, avg=25854.85, stdev=19734.98 00:46:20.794 lat (usec): min=858, max=41874, avg=25881.13, stdev=19736.04 00:46:20.794 clat percentiles (usec): 00:46:20.794 | 1.00th=[ 848], 5.00th=[ 971], 10.00th=[ 971], 20.00th=[ 1057], 00:46:20.794 | 30.00th=[ 1156], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:46:20.794 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:46:20.794 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:46:20.794 | 99.99th=[41681] 00:46:20.794 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:46:20.794 slat (nsec): min=9808, max=53714, avg=30508.09, stdev=10045.46 00:46:20.794 clat (usec): min=217, max=699, avg=441.57, stdev=70.73 00:46:20.794 lat (usec): min=252, max=710, avg=472.08, stdev=73.93 00:46:20.794 clat percentiles (usec): 00:46:20.794 | 1.00th=[ 249], 5.00th=[ 330], 10.00th=[ 351], 20.00th=[ 375], 00:46:20.794 | 30.00th=[ 408], 40.00th=[ 437], 50.00th=[ 453], 60.00th=[ 469], 00:46:20.794 | 70.00th=[ 486], 80.00th=[ 498], 90.00th=[ 519], 95.00th=[ 537], 00:46:20.794 | 99.00th=[ 586], 99.50th=[ 603], 99.90th=[ 701], 99.95th=[ 701], 00:46:20.794 | 99.99th=[ 701] 00:46:20.794 bw ( KiB/s): min= 4096, max= 4096, per=46.45%, avg=4096.00, stdev= 0.00, samples=1 00:46:20.794 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:46:20.794 lat (usec) : 250=1.11%, 500=76.71%, 750=16.82%, 1000=0.74% 00:46:20.794 lat (msec) : 2=1.29%, 50=3.33% 00:46:20.794 cpu : usr=0.70%, sys=1.60%, ctx=543, majf=0, minf=1 00:46:20.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:20.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:20.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:20.794 issued rwts: total=29,512,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:46:20.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:20.794 job2: (groupid=0, jobs=1): err= 0: pid=3393736: Mon Oct 7 14:56:44 2024 00:46:20.794 read: IOPS=15, BW=62.3KiB/s (63.8kB/s)(64.0KiB/1027msec) 00:46:20.794 slat (nsec): min=26137, max=26767, avg=26422.44, stdev=175.50 00:46:20.794 clat (usec): min=1282, max=42344, avg=39399.32, stdev=10165.76 00:46:20.794 lat (usec): min=1308, max=42371, avg=39425.75, stdev=10165.79 00:46:20.794 clat percentiles (usec): 00:46:20.794 | 1.00th=[ 1287], 5.00th=[ 1287], 10.00th=[41681], 20.00th=[41681], 00:46:20.794 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:46:20.794 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:46:20.794 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:46:20.794 | 99.99th=[42206] 00:46:20.794 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:46:20.794 slat (nsec): min=10467, max=61457, avg=32399.66, stdev=9241.93 00:46:20.794 clat (usec): min=158, max=996, avg=724.91, stdev=142.51 00:46:20.794 lat (usec): min=170, max=1046, avg=757.31, stdev=144.97 00:46:20.794 clat percentiles (usec): 00:46:20.794 | 1.00th=[ 326], 5.00th=[ 453], 10.00th=[ 529], 20.00th=[ 611], 00:46:20.794 | 30.00th=[ 668], 40.00th=[ 717], 50.00th=[ 750], 60.00th=[ 783], 00:46:20.794 | 70.00th=[ 824], 80.00th=[ 848], 90.00th=[ 889], 95.00th=[ 914], 00:46:20.794 | 99.00th=[ 955], 99.50th=[ 996], 99.90th=[ 996], 99.95th=[ 996], 00:46:20.794 | 99.99th=[ 996] 00:46:20.794 bw ( KiB/s): min= 4096, max= 4096, per=46.45%, avg=4096.00, stdev= 0.00, samples=1 00:46:20.794 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:46:20.794 lat (usec) : 250=0.19%, 500=7.20%, 750=42.05%, 1000=47.54% 00:46:20.794 lat (msec) : 2=0.19%, 50=2.84% 00:46:20.794 cpu : usr=0.58%, sys=1.66%, ctx=531, majf=0, minf=1 00:46:20.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:46:20.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:20.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:20.794 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:20.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:20.794 job3: (groupid=0, jobs=1): err= 0: pid=3393742: Mon Oct 7 14:56:44 2024 00:46:20.794 read: IOPS=504, BW=2018KiB/s (2066kB/s)(2020KiB/1001msec) 00:46:20.794 slat (nsec): min=26624, max=64504, avg=27879.39, stdev=3346.75 00:46:20.794 clat (usec): min=778, max=41513, avg=1366.06, stdev=3554.81 00:46:20.794 lat (usec): min=806, max=41542, avg=1393.94, stdev=3554.90 00:46:20.794 clat percentiles (usec): 00:46:20.794 | 1.00th=[ 832], 5.00th=[ 914], 10.00th=[ 955], 20.00th=[ 1004], 00:46:20.794 | 30.00th=[ 1029], 40.00th=[ 1045], 50.00th=[ 1057], 60.00th=[ 1074], 00:46:20.794 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1123], 95.00th=[ 1156], 00:46:20.794 | 99.00th=[ 1237], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:46:20.794 | 99.99th=[41681] 00:46:20.794 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:46:20.794 slat (usec): min=9, max=41590, avg=165.58, stdev=2201.14 00:46:20.794 clat (usec): min=178, max=813, avg=387.74, stdev=118.69 00:46:20.794 lat (usec): min=188, max=41986, avg=553.32, stdev=2204.78 00:46:20.794 clat percentiles (usec): 00:46:20.794 | 1.00th=[ 200], 5.00th=[ 237], 10.00th=[ 249], 20.00th=[ 273], 00:46:20.794 | 30.00th=[ 314], 40.00th=[ 347], 50.00th=[ 367], 60.00th=[ 400], 00:46:20.794 | 70.00th=[ 453], 80.00th=[ 490], 90.00th=[ 545], 95.00th=[ 611], 00:46:20.794 | 99.00th=[ 701], 99.50th=[ 791], 99.90th=[ 816], 99.95th=[ 816], 00:46:20.794 | 99.99th=[ 816] 00:46:20.794 bw ( KiB/s): min= 4096, max= 4096, per=46.45%, avg=4096.00, stdev= 0.00, samples=1 00:46:20.794 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:46:20.794 lat (usec) : 250=5.11%, 500=36.38%, 
750=8.46%, 1000=9.83% 00:46:20.794 lat (msec) : 2=39.82%, 50=0.39% 00:46:20.794 cpu : usr=1.50%, sys=3.70%, ctx=1021, majf=0, minf=1 00:46:20.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:20.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:20.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:20.794 issued rwts: total=505,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:20.794 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:20.794 00:46:20.794 Run status group 0 (all jobs): 00:46:20.794 READ: bw=4136KiB/s (4236kB/s), 62.3KiB/s-2046KiB/s (63.8kB/s-2095kB/s), io=4248KiB (4350kB), run=1001-1027msec 00:46:20.794 WRITE: bw=8818KiB/s (9030kB/s), 1994KiB/s-2909KiB/s (2042kB/s-2979kB/s), io=9056KiB (9273kB), run=1001-1027msec 00:46:20.794 00:46:20.794 Disk stats (read/write): 00:46:20.794 nvme0n1: ios=533/512, merge=0/0, ticks=532/292, in_queue=824, util=86.77% 00:46:20.794 nvme0n2: ios=63/512, merge=0/0, ticks=1138/226, in_queue=1364, util=87.33% 00:46:20.794 nvme0n3: ios=33/512, merge=0/0, ticks=1301/348, in_queue=1649, util=91.43% 00:46:20.794 nvme0n4: ios=406/512, merge=0/0, ticks=1110/154, in_queue=1264, util=96.14% 00:46:21.054 14:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:46:21.054 [global] 00:46:21.054 thread=1 00:46:21.054 invalidate=1 00:46:21.054 rw=randwrite 00:46:21.054 time_based=1 00:46:21.054 runtime=1 00:46:21.054 ioengine=libaio 00:46:21.054 direct=1 00:46:21.054 bs=4096 00:46:21.054 iodepth=1 00:46:21.054 norandommap=0 00:46:21.054 numjobs=1 00:46:21.054 00:46:21.054 verify_dump=1 00:46:21.054 verify_backlog=512 00:46:21.054 verify_state_save=0 00:46:21.054 do_verify=1 00:46:21.054 verify=crc32c-intel 00:46:21.054 [job0] 00:46:21.054 filename=/dev/nvme0n1 00:46:21.054 [job1] 
00:46:21.054 filename=/dev/nvme0n2 00:46:21.054 [job2] 00:46:21.054 filename=/dev/nvme0n3 00:46:21.054 [job3] 00:46:21.054 filename=/dev/nvme0n4 00:46:21.054 Could not set queue depth (nvme0n1) 00:46:21.054 Could not set queue depth (nvme0n2) 00:46:21.054 Could not set queue depth (nvme0n3) 00:46:21.054 Could not set queue depth (nvme0n4) 00:46:21.314 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:21.314 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:21.314 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:21.314 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:21.314 fio-3.35 00:46:21.314 Starting 4 threads 00:46:22.697 00:46:22.697 job0: (groupid=0, jobs=1): err= 0: pid=3394224: Mon Oct 7 14:56:46 2024 00:46:22.697 read: IOPS=18, BW=74.4KiB/s (76.2kB/s)(76.0KiB/1021msec) 00:46:22.697 slat (nsec): min=26316, max=27354, avg=26682.26, stdev=273.57 00:46:22.697 clat (usec): min=986, max=41939, avg=38915.20, stdev=9188.10 00:46:22.697 lat (usec): min=1012, max=41966, avg=38941.88, stdev=9188.13 00:46:22.697 clat percentiles (usec): 00:46:22.697 | 1.00th=[ 988], 5.00th=[ 988], 10.00th=[40633], 20.00th=[40633], 00:46:22.697 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:46:22.697 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:46:22.697 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:46:22.697 | 99.99th=[41681] 00:46:22.697 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:46:22.697 slat (nsec): min=9598, max=54417, avg=30589.55, stdev=9422.76 00:46:22.697 clat (usec): min=131, max=1536, avg=507.70, stdev=143.94 00:46:22.697 lat (usec): min=143, max=1546, avg=538.29, stdev=145.95 00:46:22.697 clat percentiles 
(usec): 00:46:22.697 | 1.00th=[ 219], 5.00th=[ 281], 10.00th=[ 330], 20.00th=[ 383], 00:46:22.697 | 30.00th=[ 424], 40.00th=[ 465], 50.00th=[ 510], 60.00th=[ 545], 00:46:22.697 | 70.00th=[ 578], 80.00th=[ 627], 90.00th=[ 693], 95.00th=[ 725], 00:46:22.697 | 99.00th=[ 791], 99.50th=[ 824], 99.90th=[ 1532], 99.95th=[ 1532], 00:46:22.697 | 99.99th=[ 1532] 00:46:22.697 bw ( KiB/s): min= 4096, max= 4096, per=46.27%, avg=4096.00, stdev= 0.00, samples=1 00:46:22.697 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:46:22.697 lat (usec) : 250=2.64%, 500=42.75%, 750=48.59%, 1000=2.26% 00:46:22.697 lat (msec) : 2=0.38%, 50=3.39% 00:46:22.697 cpu : usr=0.69%, sys=1.67%, ctx=532, majf=0, minf=1 00:46:22.697 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:22.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:22.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:22.697 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:22.697 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:22.697 job1: (groupid=0, jobs=1): err= 0: pid=3394225: Mon Oct 7 14:56:46 2024 00:46:22.697 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:46:22.697 slat (nsec): min=7299, max=60251, avg=25057.52, stdev=2655.63 00:46:22.697 clat (usec): min=669, max=1194, avg=976.27, stdev=69.68 00:46:22.697 lat (usec): min=694, max=1218, avg=1001.33, stdev=69.67 00:46:22.697 clat percentiles (usec): 00:46:22.697 | 1.00th=[ 807], 5.00th=[ 840], 10.00th=[ 881], 20.00th=[ 922], 00:46:22.697 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 996], 00:46:22.697 | 70.00th=[ 1012], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1074], 00:46:22.697 | 99.00th=[ 1139], 99.50th=[ 1156], 99.90th=[ 1188], 99.95th=[ 1188], 00:46:22.697 | 99.99th=[ 1188] 00:46:22.697 write: IOPS=767, BW=3069KiB/s (3143kB/s)(3072KiB/1001msec); 0 zone resets 00:46:22.698 slat (nsec): min=9402, 
max=61704, avg=28306.07, stdev=8832.85 00:46:22.698 clat (usec): min=218, max=1005, avg=593.42, stdev=126.54 00:46:22.698 lat (usec): min=228, max=1056, avg=621.72, stdev=129.97 00:46:22.698 clat percentiles (usec): 00:46:22.698 | 1.00th=[ 285], 5.00th=[ 367], 10.00th=[ 424], 20.00th=[ 490], 00:46:22.698 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 627], 00:46:22.698 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 758], 95.00th=[ 791], 00:46:22.698 | 99.00th=[ 857], 99.50th=[ 873], 99.90th=[ 1004], 99.95th=[ 1004], 00:46:22.698 | 99.99th=[ 1004] 00:46:22.698 bw ( KiB/s): min= 4096, max= 4096, per=46.27%, avg=4096.00, stdev= 0.00, samples=1 00:46:22.698 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:46:22.698 lat (usec) : 250=0.08%, 500=13.36%, 750=40.31%, 1000=30.86% 00:46:22.698 lat (msec) : 2=15.39% 00:46:22.698 cpu : usr=1.70%, sys=3.80%, ctx=1280, majf=0, minf=1 00:46:22.698 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:22.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:22.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:22.698 issued rwts: total=512,768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:22.698 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:22.698 job2: (groupid=0, jobs=1): err= 0: pid=3394226: Mon Oct 7 14:56:46 2024 00:46:22.698 read: IOPS=16, BW=65.3KiB/s (66.9kB/s)(68.0KiB/1041msec) 00:46:22.698 slat (nsec): min=26882, max=28562, avg=27408.82, stdev=410.90 00:46:22.698 clat (usec): min=40979, max=42039, avg=41902.24, stdev=249.46 00:46:22.698 lat (usec): min=41006, max=42066, avg=41929.65, stdev=249.45 00:46:22.698 clat percentiles (usec): 00:46:22.698 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:46:22.698 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:46:22.698 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:46:22.698 | 
99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:46:22.698 | 99.99th=[42206] 00:46:22.698 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:46:22.698 slat (nsec): min=9101, max=67401, avg=29829.66, stdev=9932.39 00:46:22.698 clat (usec): min=235, max=1164, avg=603.90, stdev=124.90 00:46:22.698 lat (usec): min=244, max=1198, avg=633.73, stdev=129.58 00:46:22.698 clat percentiles (usec): 00:46:22.698 | 1.00th=[ 285], 5.00th=[ 375], 10.00th=[ 429], 20.00th=[ 506], 00:46:22.698 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 652], 00:46:22.698 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 742], 95.00th=[ 783], 00:46:22.698 | 99.00th=[ 848], 99.50th=[ 873], 99.90th=[ 1172], 99.95th=[ 1172], 00:46:22.698 | 99.99th=[ 1172] 00:46:22.698 bw ( KiB/s): min= 4096, max= 4096, per=46.27%, avg=4096.00, stdev= 0.00, samples=1 00:46:22.698 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:46:22.698 lat (usec) : 250=0.19%, 500=18.34%, 750=70.13%, 1000=7.94% 00:46:22.698 lat (msec) : 2=0.19%, 50=3.21% 00:46:22.698 cpu : usr=1.06%, sys=1.83%, ctx=529, majf=0, minf=1 00:46:22.698 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:22.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:22.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:22.698 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:22.698 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:22.698 job3: (groupid=0, jobs=1): err= 0: pid=3394230: Mon Oct 7 14:56:46 2024 00:46:22.698 read: IOPS=17, BW=70.9KiB/s (72.6kB/s)(72.0KiB/1016msec) 00:46:22.698 slat (nsec): min=26033, max=27026, avg=26394.22, stdev=284.29 00:46:22.698 clat (usec): min=41160, max=42033, avg=41915.42, stdev=194.67 00:46:22.698 lat (usec): min=41186, max=42060, avg=41941.81, stdev=194.78 00:46:22.698 clat percentiles (usec): 00:46:22.698 | 1.00th=[41157], 
5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:46:22.698 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:46:22.698 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:46:22.698 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:46:22.698 | 99.99th=[42206] 00:46:22.698 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:46:22.698 slat (nsec): min=9673, max=54719, avg=31954.38, stdev=8378.31 00:46:22.698 clat (usec): min=167, max=862, avg=465.45, stdev=121.54 00:46:22.698 lat (usec): min=176, max=895, avg=497.41, stdev=123.80 00:46:22.698 clat percentiles (usec): 00:46:22.698 | 1.00th=[ 217], 5.00th=[ 297], 10.00th=[ 326], 20.00th=[ 347], 00:46:22.698 | 30.00th=[ 379], 40.00th=[ 433], 50.00th=[ 465], 60.00th=[ 490], 00:46:22.698 | 70.00th=[ 529], 80.00th=[ 578], 90.00th=[ 627], 95.00th=[ 668], 00:46:22.698 | 99.00th=[ 758], 99.50th=[ 775], 99.90th=[ 865], 99.95th=[ 865], 00:46:22.698 | 99.99th=[ 865] 00:46:22.698 bw ( KiB/s): min= 4096, max= 4096, per=46.27%, avg=4096.00, stdev= 0.00, samples=1 00:46:22.698 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:46:22.698 lat (usec) : 250=3.02%, 500=58.49%, 750=33.77%, 1000=1.32% 00:46:22.698 lat (msec) : 50=3.40% 00:46:22.698 cpu : usr=0.69%, sys=1.77%, ctx=533, majf=0, minf=1 00:46:22.698 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:22.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:22.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:22.698 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:22.698 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:22.698 00:46:22.698 Run status group 0 (all jobs): 00:46:22.698 READ: bw=2175KiB/s (2227kB/s), 65.3KiB/s-2046KiB/s (66.9kB/s-2095kB/s), io=2264KiB (2318kB), run=1001-1041msec 00:46:22.698 WRITE: bw=8853KiB/s (9065kB/s), 
1967KiB/s-3069KiB/s (2015kB/s-3143kB/s), io=9216KiB (9437kB), run=1001-1041msec 00:46:22.698 00:46:22.698 Disk stats (read/write): 00:46:22.698 nvme0n1: ios=70/512, merge=0/0, ticks=648/246, in_queue=894, util=91.58% 00:46:22.698 nvme0n2: ios=559/512, merge=0/0, ticks=609/282, in_queue=891, util=92.25% 00:46:22.698 nvme0n3: ios=69/512, merge=0/0, ticks=590/243, in_queue=833, util=94.72% 00:46:22.698 nvme0n4: ios=55/512, merge=0/0, ticks=1018/231, in_queue=1249, util=98.61% 00:46:22.698 14:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:46:22.698 [global] 00:46:22.698 thread=1 00:46:22.698 invalidate=1 00:46:22.698 rw=write 00:46:22.698 time_based=1 00:46:22.698 runtime=1 00:46:22.698 ioengine=libaio 00:46:22.698 direct=1 00:46:22.698 bs=4096 00:46:22.698 iodepth=128 00:46:22.698 norandommap=0 00:46:22.698 numjobs=1 00:46:22.698 00:46:22.698 verify_dump=1 00:46:22.698 verify_backlog=512 00:46:22.698 verify_state_save=0 00:46:22.698 do_verify=1 00:46:22.698 verify=crc32c-intel 00:46:22.698 [job0] 00:46:22.698 filename=/dev/nvme0n1 00:46:22.698 [job1] 00:46:22.698 filename=/dev/nvme0n2 00:46:22.698 [job2] 00:46:22.698 filename=/dev/nvme0n3 00:46:22.698 [job3] 00:46:22.698 filename=/dev/nvme0n4 00:46:22.698 Could not set queue depth (nvme0n1) 00:46:22.698 Could not set queue depth (nvme0n2) 00:46:22.698 Could not set queue depth (nvme0n3) 00:46:22.698 Could not set queue depth (nvme0n4) 00:46:22.958 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:22.958 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:22.958 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:22.958 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:46:22.958 fio-3.35 00:46:22.958 Starting 4 threads 00:46:24.342 00:46:24.342 job0: (groupid=0, jobs=1): err= 0: pid=3394748: Mon Oct 7 14:56:47 2024 00:46:24.342 read: IOPS=6192, BW=24.2MiB/s (25.4MB/s)(24.3MiB/1005msec) 00:46:24.342 slat (nsec): min=996, max=8953.2k, avg=77960.39, stdev=587440.70 00:46:24.342 clat (usec): min=4122, max=21548, avg=10271.82, stdev=2382.68 00:46:24.342 lat (usec): min=4308, max=21554, avg=10349.78, stdev=2412.68 00:46:24.342 clat percentiles (usec): 00:46:24.342 | 1.00th=[ 5866], 5.00th=[ 7177], 10.00th=[ 7767], 20.00th=[ 8455], 00:46:24.342 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[10028], 00:46:24.342 | 70.00th=[10814], 80.00th=[12649], 90.00th=[14091], 95.00th=[14615], 00:46:24.342 | 99.00th=[16712], 99.50th=[17433], 99.90th=[18744], 99.95th=[19006], 00:46:24.342 | 99.99th=[21627] 00:46:24.342 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:46:24.342 slat (nsec): min=1730, max=8098.2k, avg=71053.31, stdev=508269.25 00:46:24.342 clat (usec): min=1166, max=18642, avg=9476.90, stdev=2377.88 00:46:24.342 lat (usec): min=1175, max=18646, avg=9547.95, stdev=2391.46 00:46:24.342 clat percentiles (usec): 00:46:24.342 | 1.00th=[ 4948], 5.00th=[ 5932], 10.00th=[ 6587], 20.00th=[ 7308], 00:46:24.342 | 30.00th=[ 8029], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[10028], 00:46:24.342 | 70.00th=[10421], 80.00th=[11600], 90.00th=[13042], 95.00th=[13304], 00:46:24.342 | 99.00th=[15401], 99.50th=[15926], 99.90th=[17695], 99.95th=[17957], 00:46:24.342 | 99.99th=[18744] 00:46:24.342 bw ( KiB/s): min=25696, max=27168, per=28.66%, avg=26432.00, stdev=1040.86, samples=2 00:46:24.342 iops : min= 6424, max= 6792, avg=6608.00, stdev=260.22, samples=2 00:46:24.342 lat (msec) : 2=0.07%, 4=0.10%, 10=59.55%, 20=40.27%, 50=0.01% 00:46:24.342 cpu : usr=4.58%, sys=7.27%, ctx=411, majf=0, minf=1 00:46:24.342 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, 
>=64=99.5% 00:46:24.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:24.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:24.342 issued rwts: total=6223,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:24.342 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:24.342 job1: (groupid=0, jobs=1): err= 0: pid=3394749: Mon Oct 7 14:56:47 2024 00:46:24.342 read: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec) 00:46:24.342 slat (nsec): min=973, max=13571k, avg=244200.58, stdev=1324494.79 00:46:24.342 clat (usec): min=9791, max=66274, avg=31864.04, stdev=13040.60 00:46:24.342 lat (usec): min=9797, max=69799, avg=32108.24, stdev=13155.47 00:46:24.342 clat percentiles (usec): 00:46:24.342 | 1.00th=[11600], 5.00th=[12911], 10.00th=[13829], 20.00th=[15401], 00:46:24.342 | 30.00th=[23987], 40.00th=[28705], 50.00th=[32637], 60.00th=[36963], 00:46:24.342 | 70.00th=[40633], 80.00th=[44827], 90.00th=[49021], 95.00th=[51119], 00:46:24.342 | 99.00th=[58459], 99.50th=[61604], 99.90th=[66323], 99.95th=[66323], 00:46:24.342 | 99.99th=[66323] 00:46:24.342 write: IOPS=2226, BW=8906KiB/s (9119kB/s)(8968KiB/1007msec); 0 zone resets 00:46:24.342 slat (nsec): min=1619, max=19358k, avg=215700.70, stdev=1340429.73 00:46:24.342 clat (usec): min=2556, max=78944, avg=25221.47, stdev=15458.34 00:46:24.342 lat (usec): min=3731, max=78952, avg=25437.17, stdev=15610.53 00:46:24.342 clat percentiles (usec): 00:46:24.342 | 1.00th=[ 4293], 5.00th=[10290], 10.00th=[10552], 20.00th=[10945], 00:46:24.342 | 30.00th=[11600], 40.00th=[13042], 50.00th=[27132], 60.00th=[29230], 00:46:24.342 | 70.00th=[32113], 80.00th=[35914], 90.00th=[44827], 95.00th=[55313], 00:46:24.342 | 99.00th=[71828], 99.50th=[76022], 99.90th=[79168], 99.95th=[79168], 00:46:24.342 | 99.99th=[79168] 00:46:24.342 bw ( KiB/s): min= 8192, max= 8720, per=9.17%, avg=8456.00, stdev=373.35, samples=2 00:46:24.342 iops : min= 2048, max= 2180, avg=2114.00, stdev=93.34, 
samples=2 00:46:24.342 lat (msec) : 4=0.51%, 10=2.26%, 20=32.94%, 50=58.39%, 100=5.90% 00:46:24.342 cpu : usr=1.79%, sys=2.68%, ctx=165, majf=0, minf=2 00:46:24.342 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:46:24.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:24.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:24.342 issued rwts: total=2048,2242,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:24.342 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:24.342 job2: (groupid=0, jobs=1): err= 0: pid=3394750: Mon Oct 7 14:56:47 2024 00:46:24.342 read: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec) 00:46:24.342 slat (nsec): min=1018, max=8452.8k, avg=75001.87, stdev=533323.18 00:46:24.342 clat (usec): min=4336, max=25128, avg=10001.17, stdev=2644.72 00:46:24.342 lat (usec): min=4342, max=25130, avg=10076.17, stdev=2677.62 00:46:24.342 clat percentiles (usec): 00:46:24.342 | 1.00th=[ 5080], 5.00th=[ 6718], 10.00th=[ 7177], 20.00th=[ 8225], 00:46:24.342 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896], 00:46:24.342 | 70.00th=[10421], 80.00th=[11863], 90.00th=[13435], 95.00th=[14746], 00:46:24.342 | 99.00th=[20055], 99.50th=[21890], 99.90th=[24511], 99.95th=[25035], 00:46:24.342 | 99.99th=[25035] 00:46:24.342 write: IOPS=6598, BW=25.8MiB/s (27.0MB/s)(25.9MiB/1006msec); 0 zone resets 00:46:24.342 slat (nsec): min=1714, max=7133.4k, avg=76149.42, stdev=511265.53 00:46:24.342 clat (usec): min=2256, max=25126, avg=9956.96, stdev=3921.93 00:46:24.342 lat (usec): min=2310, max=25130, avg=10033.11, stdev=3937.54 00:46:24.342 clat percentiles (usec): 00:46:24.342 | 1.00th=[ 4080], 5.00th=[ 5604], 10.00th=[ 5866], 20.00th=[ 6849], 00:46:24.342 | 30.00th=[ 7570], 40.00th=[ 8094], 50.00th=[ 8717], 60.00th=[ 9503], 00:46:24.342 | 70.00th=[11469], 80.00th=[12387], 90.00th=[16450], 95.00th=[19006], 00:46:24.342 | 99.00th=[21627], 99.50th=[21890], 
99.90th=[23462], 99.95th=[24511], 00:46:24.343 | 99.99th=[25035] 00:46:24.343 bw ( KiB/s): min=25328, max=26752, per=28.24%, avg=26040.00, stdev=1006.92, samples=2 00:46:24.343 iops : min= 6332, max= 6688, avg=6510.00, stdev=251.73, samples=2 00:46:24.343 lat (msec) : 4=0.34%, 10=63.14%, 20=34.43%, 50=2.09% 00:46:24.343 cpu : usr=4.88%, sys=7.26%, ctx=361, majf=0, minf=2 00:46:24.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:46:24.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:24.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:24.343 issued rwts: total=6144,6638,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:24.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:24.343 job3: (groupid=0, jobs=1): err= 0: pid=3394751: Mon Oct 7 14:56:47 2024 00:46:24.343 read: IOPS=7360, BW=28.8MiB/s (30.1MB/s)(29.0MiB/1007msec) 00:46:24.343 slat (nsec): min=1016, max=8781.1k, avg=63120.89, stdev=461237.46 00:46:24.343 clat (usec): min=3029, max=20657, avg=8373.53, stdev=2359.30 00:46:24.343 lat (usec): min=3840, max=20665, avg=8436.65, stdev=2375.95 00:46:24.343 clat percentiles (usec): 00:46:24.343 | 1.00th=[ 5080], 5.00th=[ 5342], 10.00th=[ 5800], 20.00th=[ 6456], 00:46:24.343 | 30.00th=[ 7111], 40.00th=[ 7439], 50.00th=[ 7701], 60.00th=[ 8160], 00:46:24.343 | 70.00th=[ 9372], 80.00th=[10421], 90.00th=[11469], 95.00th=[12125], 00:46:24.343 | 99.00th=[17695], 99.50th=[17957], 99.90th=[20579], 99.95th=[20579], 00:46:24.343 | 99.99th=[20579] 00:46:24.343 write: IOPS=7626, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1007msec); 0 zone resets 00:46:24.343 slat (nsec): min=1737, max=42547k, avg=64320.14, stdev=626023.78 00:46:24.343 clat (usec): min=1407, max=64338, avg=7586.79, stdev=1961.14 00:46:24.343 lat (usec): min=1421, max=64386, avg=7651.11, stdev=2067.27 00:46:24.343 clat percentiles (usec): 00:46:24.343 | 1.00th=[ 3326], 5.00th=[ 4948], 10.00th=[ 5276], 20.00th=[ 6063], 
00:46:24.343 | 30.00th=[ 6915], 40.00th=[ 7308], 50.00th=[ 7635], 60.00th=[ 7963], 00:46:24.343 | 70.00th=[ 8160], 80.00th=[ 8455], 90.00th=[10159], 95.00th=[10552], 00:46:24.343 | 99.00th=[11076], 99.50th=[11076], 99.90th=[14877], 99.95th=[14877], 00:46:24.343 | 99.99th=[64226] 00:46:24.343 bw ( KiB/s): min=28672, max=32768, per=33.31%, avg=30720.00, stdev=2896.31, samples=2 00:46:24.343 iops : min= 7168, max= 8192, avg=7680.00, stdev=724.08, samples=2 00:46:24.343 lat (msec) : 2=0.17%, 4=0.81%, 10=80.89%, 20=18.04%, 50=0.07% 00:46:24.343 lat (msec) : 100=0.01% 00:46:24.343 cpu : usr=5.67%, sys=7.85%, ctx=594, majf=0, minf=1 00:46:24.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:46:24.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:24.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:24.343 issued rwts: total=7412,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:24.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:24.343 00:46:24.343 Run status group 0 (all jobs): 00:46:24.343 READ: bw=84.7MiB/s (88.8MB/s), 8135KiB/s-28.8MiB/s (8330kB/s-30.1MB/s), io=85.3MiB (89.4MB), run=1005-1007msec 00:46:24.343 WRITE: bw=90.1MiB/s (94.4MB/s), 8906KiB/s-29.8MiB/s (9119kB/s-31.2MB/s), io=90.7MiB (95.1MB), run=1005-1007msec 00:46:24.343 00:46:24.343 Disk stats (read/write): 00:46:24.343 nvme0n1: ios=5147/5632, merge=0/0, ticks=50580/50728, in_queue=101308, util=84.47% 00:46:24.343 nvme0n2: ios=1874/2048, merge=0/0, ticks=18216/16196, in_queue=34412, util=90.93% 00:46:24.343 nvme0n3: ios=5181/5201, merge=0/0, ticks=49639/52108, in_queue=101747, util=92.83% 00:46:24.343 nvme0n4: ios=6191/6215, merge=0/0, ticks=49321/44685, in_queue=94006, util=96.05% 00:46:24.343 14:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 
00:46:24.343 [global] 00:46:24.343 thread=1 00:46:24.343 invalidate=1 00:46:24.343 rw=randwrite 00:46:24.343 time_based=1 00:46:24.343 runtime=1 00:46:24.343 ioengine=libaio 00:46:24.343 direct=1 00:46:24.343 bs=4096 00:46:24.343 iodepth=128 00:46:24.343 norandommap=0 00:46:24.343 numjobs=1 00:46:24.343 00:46:24.343 verify_dump=1 00:46:24.343 verify_backlog=512 00:46:24.343 verify_state_save=0 00:46:24.343 do_verify=1 00:46:24.343 verify=crc32c-intel 00:46:24.343 [job0] 00:46:24.343 filename=/dev/nvme0n1 00:46:24.343 [job1] 00:46:24.343 filename=/dev/nvme0n2 00:46:24.343 [job2] 00:46:24.343 filename=/dev/nvme0n3 00:46:24.343 [job3] 00:46:24.343 filename=/dev/nvme0n4 00:46:24.343 Could not set queue depth (nvme0n1) 00:46:24.343 Could not set queue depth (nvme0n2) 00:46:24.343 Could not set queue depth (nvme0n3) 00:46:24.343 Could not set queue depth (nvme0n4) 00:46:24.603 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:24.603 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:24.603 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:24.603 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:46:24.603 fio-3.35 00:46:24.603 Starting 4 threads 00:46:26.045 00:46:26.045 job0: (groupid=0, jobs=1): err= 0: pid=3395266: Mon Oct 7 14:56:49 2024 00:46:26.045 read: IOPS=2468, BW=9873KiB/s (10.1MB/s)(10.2MiB/1053msec) 00:46:26.045 slat (nsec): min=934, max=35081k, avg=153525.61, stdev=1300221.15 00:46:26.045 clat (usec): min=2092, max=71148, avg=17994.21, stdev=17226.67 00:46:26.045 lat (usec): min=2112, max=95413, avg=18147.74, stdev=17351.51 00:46:26.045 clat percentiles (usec): 00:46:26.045 | 1.00th=[ 3425], 5.00th=[ 4686], 10.00th=[ 6456], 20.00th=[ 7242], 00:46:26.045 | 30.00th=[ 7832], 40.00th=[ 8291], 50.00th=[ 
9765], 60.00th=[11863], 00:46:26.045 | 70.00th=[14615], 80.00th=[32113], 90.00th=[54264], 95.00th=[58983], 00:46:26.045 | 99.00th=[62653], 99.50th=[62653], 99.90th=[67634], 99.95th=[67634], 00:46:26.045 | 99.99th=[70779] 00:46:26.045 write: IOPS=2917, BW=11.4MiB/s (11.9MB/s)(12.0MiB/1053msec); 0 zone resets 00:46:26.045 slat (nsec): min=1604, max=30391k, avg=194196.48, stdev=1442799.96 00:46:26.045 clat (usec): min=1984, max=102357, avg=28228.98, stdev=26786.00 00:46:26.045 lat (usec): min=1994, max=102365, avg=28423.18, stdev=26958.87 00:46:26.045 clat percentiles (msec): 00:46:26.045 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 9], 00:46:26.045 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 14], 60.00th=[ 22], 00:46:26.045 | 70.00th=[ 36], 80.00th=[ 52], 90.00th=[ 75], 95.00th=[ 91], 00:46:26.045 | 99.00th=[ 96], 99.50th=[ 97], 99.90th=[ 103], 99.95th=[ 103], 00:46:26.045 | 99.99th=[ 103] 00:46:26.045 bw ( KiB/s): min= 9920, max=13944, per=15.81%, avg=11932.00, stdev=2845.40, samples=2 00:46:26.045 iops : min= 2480, max= 3486, avg=2983.00, stdev=711.35, samples=2 00:46:26.045 lat (msec) : 2=0.05%, 4=3.63%, 10=40.12%, 20=20.19%, 50=19.66% 00:46:26.045 lat (msec) : 100=16.10%, 250=0.25% 00:46:26.045 cpu : usr=1.62%, sys=2.57%, ctx=337, majf=0, minf=1 00:46:26.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:46:26.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:26.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:26.045 issued rwts: total=2599,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:26.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:26.045 job1: (groupid=0, jobs=1): err= 0: pid=3395268: Mon Oct 7 14:56:49 2024 00:46:26.045 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:46:26.045 slat (nsec): min=960, max=17188k, avg=87890.57, stdev=663283.77 00:46:26.045 clat (usec): min=1502, max=50929, avg=11923.30, stdev=7290.95 
00:46:26.045 lat (usec): min=1509, max=50931, avg=12011.19, stdev=7324.05 00:46:26.045 clat percentiles (usec): 00:46:26.045 | 1.00th=[ 3294], 5.00th=[ 4359], 10.00th=[ 5800], 20.00th=[ 6849], 00:46:26.045 | 30.00th=[ 7439], 40.00th=[ 8717], 50.00th=[ 9765], 60.00th=[11600], 00:46:26.045 | 70.00th=[14091], 80.00th=[15795], 90.00th=[18744], 95.00th=[27657], 00:46:26.045 | 99.00th=[40109], 99.50th=[47449], 99.90th=[49546], 99.95th=[51119], 00:46:26.045 | 99.99th=[51119] 00:46:26.046 write: IOPS=5627, BW=22.0MiB/s (23.1MB/s)(22.0MiB/1002msec); 0 zone resets 00:46:26.046 slat (nsec): min=1649, max=14114k, avg=75255.40, stdev=590338.75 00:46:26.046 clat (usec): min=740, max=57685, avg=10635.51, stdev=6559.29 00:46:26.046 lat (usec): min=748, max=57693, avg=10710.77, stdev=6572.62 00:46:26.046 clat percentiles (usec): 00:46:26.046 | 1.00th=[ 2540], 5.00th=[ 4080], 10.00th=[ 4817], 20.00th=[ 6063], 00:46:26.046 | 30.00th=[ 7046], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[10290], 00:46:26.046 | 70.00th=[11994], 80.00th=[13042], 90.00th=[18220], 95.00th=[23462], 00:46:26.046 | 99.00th=[39060], 99.50th=[45876], 99.90th=[56361], 99.95th=[57410], 00:46:26.046 | 99.99th=[57934] 00:46:26.046 bw ( KiB/s): min=20240, max=24816, per=29.85%, avg=22528.00, stdev=3235.72, samples=2 00:46:26.046 iops : min= 5060, max= 6204, avg=5632.00, stdev=808.93, samples=2 00:46:26.046 lat (usec) : 750=0.02%, 1000=0.03% 00:46:26.046 lat (msec) : 2=0.36%, 4=2.79%, 10=53.48%, 20=35.25%, 50=7.94% 00:46:26.046 lat (msec) : 100=0.13% 00:46:26.046 cpu : usr=3.40%, sys=6.09%, ctx=427, majf=0, minf=1 00:46:26.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:46:26.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:26.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:26.046 issued rwts: total=5632,5639,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:26.046 latency : target=0, window=0, percentile=100.00%, depth=128 
00:46:26.046 job2: (groupid=0, jobs=1): err= 0: pid=3395270: Mon Oct 7 14:56:49 2024 00:46:26.046 read: IOPS=6071, BW=23.7MiB/s (24.9MB/s)(24.0MiB/1012msec) 00:46:26.046 slat (nsec): min=961, max=12967k, avg=78817.39, stdev=644590.87 00:46:26.046 clat (usec): min=2473, max=47473, avg=10740.22, stdev=5479.76 00:46:26.046 lat (usec): min=2478, max=47499, avg=10819.03, stdev=5525.70 00:46:26.046 clat percentiles (usec): 00:46:26.046 | 1.00th=[ 4359], 5.00th=[ 5342], 10.00th=[ 5800], 20.00th=[ 6652], 00:46:26.046 | 30.00th=[ 7242], 40.00th=[ 8029], 50.00th=[ 8979], 60.00th=[10290], 00:46:26.046 | 70.00th=[11863], 80.00th=[14484], 90.00th=[17171], 95.00th=[22676], 00:46:26.046 | 99.00th=[31327], 99.50th=[35914], 99.90th=[37487], 99.95th=[37487], 00:46:26.046 | 99.99th=[47449] 00:46:26.046 write: IOPS=6442, BW=25.2MiB/s (26.4MB/s)(25.5MiB/1012msec); 0 zone resets 00:46:26.046 slat (nsec): min=1594, max=14203k, avg=71664.79, stdev=525359.40 00:46:26.046 clat (usec): min=566, max=31568, avg=9555.19, stdev=4298.54 00:46:26.046 lat (usec): min=570, max=31574, avg=9626.86, stdev=4327.38 00:46:26.046 clat percentiles (usec): 00:46:26.046 | 1.00th=[ 3261], 5.00th=[ 4424], 10.00th=[ 5276], 20.00th=[ 6259], 00:46:26.046 | 30.00th=[ 7111], 40.00th=[ 7767], 50.00th=[ 8356], 60.00th=[ 9765], 00:46:26.046 | 70.00th=[11076], 80.00th=[12780], 90.00th=[14353], 95.00th=[16909], 00:46:26.046 | 99.00th=[27919], 99.50th=[29492], 99.90th=[31589], 99.95th=[31589], 00:46:26.046 | 99.99th=[31589] 00:46:26.046 bw ( KiB/s): min=25456, max=25680, per=33.88%, avg=25568.00, stdev=158.39, samples=2 00:46:26.046 iops : min= 6364, max= 6420, avg=6392.00, stdev=39.60, samples=2 00:46:26.046 lat (usec) : 750=0.02% 00:46:26.046 lat (msec) : 2=0.42%, 4=1.03%, 10=58.63%, 20=35.23%, 50=4.66% 00:46:26.046 cpu : usr=4.15%, sys=6.73%, ctx=459, majf=0, minf=1 00:46:26.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:46:26.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:46:26.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:26.046 issued rwts: total=6144,6520,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:26.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:26.046 job3: (groupid=0, jobs=1): err= 0: pid=3395271: Mon Oct 7 14:56:49 2024 00:46:26.046 read: IOPS=4548, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1013msec) 00:46:26.046 slat (nsec): min=966, max=21345k, avg=108481.35, stdev=936878.80 00:46:26.046 clat (usec): min=1668, max=36235, avg=14872.91, stdev=6112.19 00:46:26.046 lat (usec): min=1694, max=36247, avg=14981.39, stdev=6171.69 00:46:26.046 clat percentiles (usec): 00:46:26.046 | 1.00th=[ 2114], 5.00th=[ 6587], 10.00th=[ 8586], 20.00th=[10683], 00:46:26.046 | 30.00th=[11076], 40.00th=[13566], 50.00th=[14353], 60.00th=[14877], 00:46:26.046 | 70.00th=[16057], 80.00th=[20579], 90.00th=[23462], 95.00th=[25297], 00:46:26.046 | 99.00th=[30016], 99.50th=[32637], 99.90th=[34341], 99.95th=[34341], 00:46:26.046 | 99.99th=[36439] 00:46:26.046 write: IOPS=4574, BW=17.9MiB/s (18.7MB/s)(18.1MiB/1013msec); 0 zone resets 00:46:26.046 slat (nsec): min=1628, max=16147k, avg=83289.91, stdev=748906.59 00:46:26.046 clat (usec): min=1135, max=58499, avg=12964.71, stdev=8257.88 00:46:26.046 lat (usec): min=1145, max=58507, avg=13048.00, stdev=8321.30 00:46:26.046 clat percentiles (usec): 00:46:26.046 | 1.00th=[ 2933], 5.00th=[ 5473], 10.00th=[ 6456], 20.00th=[ 8160], 00:46:26.046 | 30.00th=[ 9372], 40.00th=[10290], 50.00th=[10814], 60.00th=[11600], 00:46:26.046 | 70.00th=[13566], 80.00th=[17433], 90.00th=[19006], 95.00th=[25822], 00:46:26.046 | 99.00th=[56361], 99.50th=[56886], 99.90th=[58459], 99.95th=[58459], 00:46:26.046 | 99.99th=[58459] 00:46:26.046 bw ( KiB/s): min=16384, max=20480, per=24.43%, avg=18432.00, stdev=2896.31, samples=2 00:46:26.046 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:46:26.046 lat (msec) : 2=0.66%, 4=2.25%, 10=22.26%, 20=61.03%, 
50=12.87% 00:46:26.046 lat (msec) : 100=0.94% 00:46:26.046 cpu : usr=3.36%, sys=5.53%, ctx=267, majf=0, minf=2 00:46:26.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:46:26.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:26.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:46:26.046 issued rwts: total=4608,4634,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:26.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:46:26.046 00:46:26.046 Run status group 0 (all jobs): 00:46:26.046 READ: bw=70.4MiB/s (73.8MB/s), 9873KiB/s-23.7MiB/s (10.1MB/s-24.9MB/s), io=74.2MiB (77.8MB), run=1002-1053msec 00:46:26.046 WRITE: bw=73.7MiB/s (77.3MB/s), 11.4MiB/s-25.2MiB/s (11.9MB/s-26.4MB/s), io=77.6MiB (81.4MB), run=1002-1053msec 00:46:26.046 00:46:26.046 Disk stats (read/write): 00:46:26.046 nvme0n1: ios=2091/2218, merge=0/0, ticks=22271/29531, in_queue=51802, util=82.36% 00:46:26.046 nvme0n2: ios=4508/4608, merge=0/0, ticks=44555/42451, in_queue=87006, util=88.69% 00:46:26.046 nvme0n3: ios=5175/5617, merge=0/0, ticks=48029/47595, in_queue=95624, util=95.14% 00:46:26.046 nvme0n4: ios=4019/4096, merge=0/0, ticks=53479/46520, in_queue=99999, util=97.22% 00:46:26.046 14:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:46:26.046 14:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3395602 00:46:26.046 14:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:46:26.046 14:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:46:26.046 [global] 00:46:26.046 thread=1 00:46:26.046 invalidate=1 00:46:26.046 rw=read 00:46:26.046 time_based=1 00:46:26.046 runtime=10 00:46:26.046 ioengine=libaio 00:46:26.046 direct=1 
00:46:26.046 bs=4096 00:46:26.046 iodepth=1 00:46:26.046 norandommap=1 00:46:26.046 numjobs=1 00:46:26.046 00:46:26.046 [job0] 00:46:26.046 filename=/dev/nvme0n1 00:46:26.046 [job1] 00:46:26.046 filename=/dev/nvme0n2 00:46:26.046 [job2] 00:46:26.046 filename=/dev/nvme0n3 00:46:26.046 [job3] 00:46:26.046 filename=/dev/nvme0n4 00:46:26.046 Could not set queue depth (nvme0n1) 00:46:26.046 Could not set queue depth (nvme0n2) 00:46:26.046 Could not set queue depth (nvme0n3) 00:46:26.046 Could not set queue depth (nvme0n4) 00:46:26.322 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:26.322 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:26.322 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:26.322 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:26.322 fio-3.35 00:46:26.322 Starting 4 threads 00:46:28.965 14:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:46:29.226 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10252288, buflen=4096 00:46:29.226 fio: pid=3395798, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:46:29.226 14:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:46:29.226 14:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:46:29.226 14:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc0 00:46:29.226 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=1036288, buflen=4096 00:46:29.226 fio: pid=3395797, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:46:29.486 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10493952, buflen=4096 00:46:29.486 fio: pid=3395793, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:46:29.486 14:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:46:29.486 14:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:46:29.746 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=12709888, buflen=4096 00:46:29.746 fio: pid=3395794, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:46:29.746 00:46:29.746 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3395793: Mon Oct 7 14:56:53 2024 00:46:29.746 read: IOPS=875, BW=3502KiB/s (3586kB/s)(10.0MiB/2926msec) 00:46:29.746 slat (usec): min=4, max=19463, avg=38.70, stdev=487.11 00:46:29.746 clat (usec): min=308, max=40791, avg=1088.25, stdev=983.69 00:46:29.746 lat (usec): min=314, max=40798, avg=1126.95, stdev=1095.71 00:46:29.746 clat percentiles (usec): 00:46:29.746 | 1.00th=[ 717], 5.00th=[ 873], 10.00th=[ 938], 20.00th=[ 996], 00:46:29.746 | 30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:46:29.746 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1221], 00:46:29.746 | 99.00th=[ 1287], 99.50th=[ 1352], 99.90th=[ 4424], 99.95th=[30540], 00:46:29.746 | 99.99th=[40633] 00:46:29.746 bw ( KiB/s): min= 3488, max= 3688, per=33.44%, avg=3576.00, stdev=84.10, samples=5 00:46:29.746 iops : 
min= 872, max= 922, avg=894.00, stdev=21.02, samples=5 00:46:29.746 lat (usec) : 500=0.04%, 750=1.25%, 1000=19.27% 00:46:29.746 lat (msec) : 2=79.28%, 10=0.04%, 50=0.08% 00:46:29.746 cpu : usr=1.06%, sys=2.50%, ctx=2566, majf=0, minf=1 00:46:29.746 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:29.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:29.746 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:29.746 issued rwts: total=2563,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:29.746 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:29.746 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3395794: Mon Oct 7 14:56:53 2024 00:46:29.746 read: IOPS=985, BW=3940KiB/s (4035kB/s)(12.1MiB/3150msec) 00:46:29.746 slat (usec): min=3, max=22958, avg=50.37, stdev=596.54 00:46:29.746 clat (usec): min=323, max=2408, avg=949.83, stdev=141.99 00:46:29.746 lat (usec): min=350, max=23892, avg=1000.21, stdev=611.93 00:46:29.746 clat percentiles (usec): 00:46:29.746 | 1.00th=[ 603], 5.00th=[ 701], 10.00th=[ 758], 20.00th=[ 824], 00:46:29.746 | 30.00th=[ 889], 40.00th=[ 947], 50.00th=[ 979], 60.00th=[ 1012], 00:46:29.746 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1123], 00:46:29.746 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 2114], 99.95th=[ 2343], 00:46:29.746 | 99.99th=[ 2409] 00:46:29.746 bw ( KiB/s): min= 3688, max= 4246, per=37.15%, avg=3973.00, stdev=207.70, samples=6 00:46:29.746 iops : min= 922, max= 1061, avg=993.17, stdev=51.79, samples=6 00:46:29.746 lat (usec) : 500=0.26%, 750=8.34%, 1000=47.97% 00:46:29.746 lat (msec) : 2=43.27%, 4=0.13% 00:46:29.746 cpu : usr=1.81%, sys=3.78%, ctx=3112, majf=0, minf=2 00:46:29.746 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:29.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:29.746 complete 
: 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:29.746 issued rwts: total=3104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:29.746 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:29.746 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3395797: Mon Oct 7 14:56:53 2024 00:46:29.746 read: IOPS=92, BW=367KiB/s (376kB/s)(1012KiB/2758msec) 00:46:29.746 slat (usec): min=5, max=23631, avg=152.33, stdev=1582.64 00:46:29.746 clat (usec): min=269, max=43947, avg=10654.58, stdev=17643.33 00:46:29.746 lat (usec): min=275, max=43980, avg=10807.40, stdev=17645.33 00:46:29.746 clat percentiles (usec): 00:46:29.746 | 1.00th=[ 433], 5.00th=[ 486], 10.00th=[ 523], 20.00th=[ 635], 00:46:29.746 | 30.00th=[ 693], 40.00th=[ 742], 50.00th=[ 807], 60.00th=[ 857], 00:46:29.746 | 70.00th=[ 947], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:46:29.746 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:46:29.746 | 99.99th=[43779] 00:46:29.746 bw ( KiB/s): min= 96, max= 352, per=1.37%, avg=147.20, stdev=114.49, samples=5 00:46:29.746 iops : min= 24, max= 88, avg=36.80, stdev=28.62, samples=5 00:46:29.746 lat (usec) : 500=7.09%, 750=33.46%, 1000=32.68% 00:46:29.746 lat (msec) : 2=2.36%, 50=24.02% 00:46:29.746 cpu : usr=0.22%, sys=0.22%, ctx=259, majf=0, minf=2 00:46:29.746 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:29.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:29.746 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:29.746 issued rwts: total=254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:29.746 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:29.746 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3395798: Mon Oct 7 14:56:53 2024 00:46:29.746 read: IOPS=986, BW=3946KiB/s (4041kB/s)(9.78MiB/2537msec) 
00:46:29.746 slat (nsec): min=4829, max=59359, avg=24330.22, stdev=5866.60 00:46:29.746 clat (usec): min=205, max=41854, avg=979.69, stdev=2900.01 00:46:29.746 lat (usec): min=212, max=41880, avg=1004.02, stdev=2900.17 00:46:29.746 clat percentiles (usec): 00:46:29.746 | 1.00th=[ 306], 5.00th=[ 445], 10.00th=[ 523], 20.00th=[ 619], 00:46:29.746 | 30.00th=[ 742], 40.00th=[ 783], 50.00th=[ 816], 60.00th=[ 857], 00:46:29.746 | 70.00th=[ 881], 80.00th=[ 898], 90.00th=[ 930], 95.00th=[ 955], 00:46:29.746 | 99.00th=[ 1029], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:46:29.746 | 99.99th=[41681] 00:46:29.746 bw ( KiB/s): min= 104, max= 5760, per=37.01%, avg=3958.40, stdev=2209.38, samples=5 00:46:29.746 iops : min= 26, max= 1440, avg=989.60, stdev=552.34, samples=5 00:46:29.746 lat (usec) : 250=0.20%, 500=6.75%, 750=24.68%, 1000=66.73% 00:46:29.746 lat (msec) : 2=1.08%, 50=0.52% 00:46:29.746 cpu : usr=0.99%, sys=2.84%, ctx=2504, majf=0, minf=2 00:46:29.746 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:46:29.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:29.746 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:29.746 issued rwts: total=2504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:29.746 latency : target=0, window=0, percentile=100.00%, depth=1 00:46:29.746 00:46:29.746 Run status group 0 (all jobs): 00:46:29.746 READ: bw=10.4MiB/s (10.9MB/s), 367KiB/s-3946KiB/s (376kB/s-4041kB/s), io=32.9MiB (34.5MB), run=2537-3150msec 00:46:29.746 00:46:29.746 Disk stats (read/write): 00:46:29.746 nvme0n1: ios=2460/0, merge=0/0, ticks=2589/0, in_queue=2589, util=91.99% 00:46:29.746 nvme0n2: ios=3017/0, merge=0/0, ticks=2578/0, in_queue=2578, util=92.29% 00:46:29.746 nvme0n3: ios=135/0, merge=0/0, ticks=3074/0, in_queue=3074, util=100.00% 00:46:29.747 nvme0n4: ios=2225/0, merge=0/0, ticks=2159/0, in_queue=2159, util=95.98% 00:46:29.747 14:56:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:46:29.747 14:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:46:30.007 14:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:46:30.007 14:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:46:30.268 14:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:46:30.268 14:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:46:30.528 14:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:46:30.528 14:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:46:30.528 14:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:46:30.528 14:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:46:30.789 14:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:46:30.789 14:56:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3395602 00:46:30.789 14:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:46:30.789 14:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:46:31.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:46:31.731 nvmf hotplug test: fio failed as expected 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:31.731 14:56:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:31.731 rmmod nvme_tcp 00:46:31.731 rmmod nvme_fabrics 00:46:31.731 rmmod nvme_keyring 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 3392241 ']' 00:46:31.731 14:56:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 3392241 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3392241 ']' 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3392241 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:31.731 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3392241 00:46:31.991 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:46:31.991 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:46:31.991 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3392241' 00:46:31.991 killing process with pid 3392241 00:46:31.991 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3392241 00:46:31.991 14:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3392241 00:46:32.932 14:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:46:32.932 14:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:46:32.932 14:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:46:32.932 14:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@297 -- # iptr 00:46:32.932 14:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:46:32.932 14:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:46:32.932 14:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:46:32.932 14:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:32.932 14:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:32.932 14:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:32.932 14:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:32.932 14:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:34.842 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:34.842 00:46:34.842 real 0m29.917s 00:46:34.842 user 2m8.803s 00:46:34.842 sys 0m12.872s 00:46:34.842 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:34.842 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:46:34.842 ************************************ 00:46:34.842 END TEST nvmf_fio_target 00:46:34.842 ************************************ 00:46:34.842 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:46:34.842 14:56:58 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:46:34.842 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:34.842 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:46:34.842 ************************************ 00:46:34.842 START TEST nvmf_bdevio 00:46:34.842 ************************************ 00:46:34.842 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:46:35.104 * Looking for test storage... 00:46:35.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:46:35.104 14:56:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:46:35.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:35.104 --rc genhtml_branch_coverage=1 
00:46:35.104 --rc genhtml_function_coverage=1 00:46:35.104 --rc genhtml_legend=1 00:46:35.104 --rc geninfo_all_blocks=1 00:46:35.104 --rc geninfo_unexecuted_blocks=1 00:46:35.104 00:46:35.104 ' 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:46:35.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:35.104 --rc genhtml_branch_coverage=1 00:46:35.104 --rc genhtml_function_coverage=1 00:46:35.104 --rc genhtml_legend=1 00:46:35.104 --rc geninfo_all_blocks=1 00:46:35.104 --rc geninfo_unexecuted_blocks=1 00:46:35.104 00:46:35.104 ' 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:46:35.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:35.104 --rc genhtml_branch_coverage=1 00:46:35.104 --rc genhtml_function_coverage=1 00:46:35.104 --rc genhtml_legend=1 00:46:35.104 --rc geninfo_all_blocks=1 00:46:35.104 --rc geninfo_unexecuted_blocks=1 00:46:35.104 00:46:35.104 ' 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:46:35.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:35.104 --rc genhtml_branch_coverage=1 00:46:35.104 --rc genhtml_function_coverage=1 00:46:35.104 --rc genhtml_legend=1 00:46:35.104 --rc geninfo_all_blocks=1 00:46:35.104 --rc geninfo_unexecuted_blocks=1 00:46:35.104 00:46:35.104 ' 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:35.104 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:35.105 14:56:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:46:35.105 14:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:43.247 14:57:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:46:43.247 Found 0000:31:00.0 (0x8086 - 0x159b) 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:43.247 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:46:43.247 Found 0000:31:00.1 (0x8086 - 0x159b) 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:43.248 14:57:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:46:43.248 Found net devices under 0000:31:00.0: cvl_0_0 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:46:43.248 Found net devices under 0000:31:00.1: cvl_0_1 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:43.248 14:57:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:46:43.248 14:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:43.248 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:43.248 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:43.248 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:46:43.248 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:43.248 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:43.248 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:43.248 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:46:43.248 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:46:43.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:43.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms 00:46:43.248 00:46:43.248 --- 10.0.0.2 ping statistics --- 00:46:43.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:43.248 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:46:43.248 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:43.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:46:43.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:46:43.248 00:46:43.248 --- 10.0.0.1 ping statistics --- 00:46:43.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:43.248 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:46:43.248 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:43.248 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:46:43.248 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:46:43.248 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:43.248 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:46:43.248 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:46:43.248 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:43.248 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:46:43.248 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:46:43.248 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:46:43.248 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:46:43.248 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:43.248 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:43.248 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@507 -- # nvmfpid=3401340 00:46:43.248 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 3401340 00:46:43.249 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:46:43.249 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3401340 ']' 00:46:43.249 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:43.249 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:43.249 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:43.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:43.249 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:43.249 14:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:43.249 [2024-10-07 14:57:06.322265] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:46:43.249 [2024-10-07 14:57:06.324689] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:46:43.249 [2024-10-07 14:57:06.324771] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:43.249 [2024-10-07 14:57:06.473087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:43.249 [2024-10-07 14:57:06.658084] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:43.249 [2024-10-07 14:57:06.658130] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:43.249 [2024-10-07 14:57:06.658144] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:43.249 [2024-10-07 14:57:06.658154] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:43.249 [2024-10-07 14:57:06.658165] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:43.249 [2024-10-07 14:57:06.660413] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:46:43.249 [2024-10-07 14:57:06.660536] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:46:43.249 [2024-10-07 14:57:06.660626] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:46:43.249 [2024-10-07 14:57:06.660653] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:46:43.249 [2024-10-07 14:57:06.906788] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:46:43.249 [2024-10-07 14:57:06.907536] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:46:43.249 [2024-10-07 14:57:06.908146] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:46:43.249 [2024-10-07 14:57:06.908338] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:46:43.249 [2024-10-07 14:57:06.908450] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:46:43.509 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:43.509 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:46:43.509 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:46:43.509 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:43.509 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:43.509 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:43.509 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:46:43.509 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:43.509 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:43.509 [2024-10-07 14:57:07.137849] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:43.509 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:43.509 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:46:43.509 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:46:43.509 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:43.769 Malloc0 00:46:43.769 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:43.769 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:46:43.769 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:43.769 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:43.769 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:43.769 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:46:43.769 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:43.769 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:43.769 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:43.769 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:43.769 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:43.769 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:43.769 [2024-10-07 14:57:07.269713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:46:43.769 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:43.769 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:46:43.769 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:46:43.769 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:46:43.769 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:46:43.770 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:46:43.770 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:46:43.770 { 00:46:43.770 "params": { 00:46:43.770 "name": "Nvme$subsystem", 00:46:43.770 "trtype": "$TEST_TRANSPORT", 00:46:43.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:46:43.770 "adrfam": "ipv4", 00:46:43.770 "trsvcid": "$NVMF_PORT", 00:46:43.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:46:43.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:46:43.770 "hdgst": ${hdgst:-false}, 00:46:43.770 "ddgst": ${ddgst:-false} 00:46:43.770 }, 00:46:43.770 "method": "bdev_nvme_attach_controller" 00:46:43.770 } 00:46:43.770 EOF 00:46:43.770 )") 00:46:43.770 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:46:43.770 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
00:46:43.770 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:46:43.770 14:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:46:43.770 "params": { 00:46:43.770 "name": "Nvme1", 00:46:43.770 "trtype": "tcp", 00:46:43.770 "traddr": "10.0.0.2", 00:46:43.770 "adrfam": "ipv4", 00:46:43.770 "trsvcid": "4420", 00:46:43.770 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:43.770 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:43.770 "hdgst": false, 00:46:43.770 "ddgst": false 00:46:43.770 }, 00:46:43.770 "method": "bdev_nvme_attach_controller" 00:46:43.770 }' 00:46:43.770 [2024-10-07 14:57:07.353690] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:46:43.770 [2024-10-07 14:57:07.353783] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3401427 ] 00:46:43.770 [2024-10-07 14:57:07.468339] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:46:44.030 [2024-10-07 14:57:07.647982] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:46:44.030 [2024-10-07 14:57:07.647993] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:46:44.030 [2024-10-07 14:57:07.647997] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:46:44.600 I/O targets: 00:46:44.600 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:46:44.600 00:46:44.600 00:46:44.600 CUnit - A unit testing framework for C - Version 2.1-3 00:46:44.600 http://cunit.sourceforge.net/ 00:46:44.600 00:46:44.600 00:46:44.600 Suite: bdevio tests on: Nvme1n1 00:46:44.600 Test: blockdev write read block ...passed 00:46:44.600 Test: blockdev write zeroes read block ...passed 00:46:44.600 Test: blockdev write zeroes read no split ...passed 00:46:44.600 Test: blockdev 
write zeroes read split ...passed 00:46:44.600 Test: blockdev write zeroes read split partial ...passed 00:46:44.600 Test: blockdev reset ...[2024-10-07 14:57:08.235525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:46:44.600 [2024-10-07 14:57:08.235629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039ec00 (9): Bad file descriptor 00:46:44.600 [2024-10-07 14:57:08.244250] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:46:44.600 passed 00:46:44.600 Test: blockdev write read 8 blocks ...passed 00:46:44.600 Test: blockdev write read size > 128k ...passed 00:46:44.600 Test: blockdev write read invalid size ...passed 00:46:44.860 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:46:44.860 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:46:44.860 Test: blockdev write read max offset ...passed 00:46:44.860 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:46:44.860 Test: blockdev writev readv 8 blocks ...passed 00:46:44.860 Test: blockdev writev readv 30 x 1block ...passed 00:46:44.860 Test: blockdev writev readv block ...passed 00:46:44.860 Test: blockdev writev readv size > 128k ...passed 00:46:44.860 Test: blockdev writev readv size > 128k in two iovs ...passed 00:46:44.860 Test: blockdev comparev and writev ...[2024-10-07 14:57:08.516164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:46:44.860 [2024-10-07 14:57:08.516196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:46:44.860 [2024-10-07 14:57:08.516211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:46:44.860 [2024-10-07 14:57:08.516220] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:46:44.860 [2024-10-07 14:57:08.516892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:46:44.860 [2024-10-07 14:57:08.516906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:46:44.860 [2024-10-07 14:57:08.516919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:46:44.860 [2024-10-07 14:57:08.516929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:46:44.860 [2024-10-07 14:57:08.517577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:46:44.860 [2024-10-07 14:57:08.517592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:46:44.860 [2024-10-07 14:57:08.517608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:46:44.860 [2024-10-07 14:57:08.517616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:46:44.860 [2024-10-07 14:57:08.518227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:46:44.860 [2024-10-07 14:57:08.518241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:46:44.860 [2024-10-07 14:57:08.518254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x200 00:46:44.860 [2024-10-07 14:57:08.518261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:46:44.860 passed 00:46:45.120 Test: blockdev nvme passthru rw ...passed 00:46:45.120 Test: blockdev nvme passthru vendor specific ...[2024-10-07 14:57:08.602934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:46:45.120 [2024-10-07 14:57:08.602954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:46:45.120 [2024-10-07 14:57:08.603316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:46:45.120 [2024-10-07 14:57:08.603327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:46:45.120 [2024-10-07 14:57:08.603735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:46:45.120 [2024-10-07 14:57:08.603746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:46:45.120 [2024-10-07 14:57:08.604147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:46:45.120 [2024-10-07 14:57:08.604158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:46:45.120 passed 00:46:45.120 Test: blockdev nvme admin passthru ...passed 00:46:45.120 Test: blockdev copy ...passed 00:46:45.120 00:46:45.120 Run Summary: Type Total Ran Passed Failed Inactive 00:46:45.120 suites 1 1 n/a 0 0 00:46:45.120 tests 23 23 23 0 0 00:46:45.120 asserts 152 152 152 0 n/a 00:46:45.120 00:46:45.120 Elapsed time = 1.296 seconds 00:46:45.691 14:57:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:45.691 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:45.691 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:45.691 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:45.691 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:46:45.691 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:46:45.691 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:46:45.691 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:46:45.691 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:45.691 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:46:45.691 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:45.691 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:45.691 rmmod nvme_tcp 00:46:45.950 rmmod nvme_fabrics 00:46:45.950 rmmod nvme_keyring 00:46:45.950 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:45.951 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:46:45.951 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:46:45.951 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # 
'[' -n 3401340 ']' 00:46:45.951 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 3401340 00:46:45.951 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3401340 ']' 00:46:45.951 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3401340 00:46:45.951 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:46:45.951 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:46:45.951 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3401340 00:46:45.951 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:46:45.951 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:46:45.951 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3401340' 00:46:45.951 killing process with pid 3401340 00:46:45.951 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3401340 00:46:45.951 14:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3401340 00:46:46.890 14:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:46:46.890 14:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:46:46.890 14:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:46:46.890 14:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 
00:46:46.890 14:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:46:46.890 14:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:46:46.890 14:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:46:46.890 14:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:46.890 14:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:46.890 14:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:46.890 14:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:46:46.890 14:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:49.432 14:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:49.432 00:46:49.432 real 0m14.127s 00:46:49.432 user 0m16.568s 00:46:49.432 sys 0m6.832s 00:46:49.432 14:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:46:49.432 14:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:46:49.432 ************************************ 00:46:49.432 END TEST nvmf_bdevio 00:46:49.432 ************************************ 00:46:49.432 14:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:46:49.432 00:46:49.432 real 5m15.611s 00:46:49.432 user 10m45.382s 00:46:49.432 sys 2m7.480s 00:46:49.432 14:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:46:49.432 14:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:46:49.432 ************************************ 00:46:49.432 END TEST nvmf_target_core_interrupt_mode 00:46:49.432 ************************************ 00:46:49.432 14:57:12 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:46:49.432 14:57:12 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:46:49.432 14:57:12 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:46:49.432 14:57:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:46:49.432 ************************************ 00:46:49.432 START TEST nvmf_interrupt 00:46:49.432 ************************************ 00:46:49.432 14:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:46:49.432 * Looking for test storage... 
00:46:49.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:46:49.432 14:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:46:49.432 14:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:46:49.432 14:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:46:49.432 14:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:46:49.432 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:49.432 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:49.432 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:49.432 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:46:49.432 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:46:49.432 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:46:49.432 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:46:49.432 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:46:49.432 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:46:49.432 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:46:49.432 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:49.432 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:46:49.432 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:46:49.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:49.433 --rc genhtml_branch_coverage=1 00:46:49.433 --rc genhtml_function_coverage=1 00:46:49.433 --rc genhtml_legend=1 00:46:49.433 --rc geninfo_all_blocks=1 00:46:49.433 --rc geninfo_unexecuted_blocks=1 00:46:49.433 00:46:49.433 ' 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:46:49.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:49.433 --rc genhtml_branch_coverage=1 00:46:49.433 --rc 
genhtml_function_coverage=1 00:46:49.433 --rc genhtml_legend=1 00:46:49.433 --rc geninfo_all_blocks=1 00:46:49.433 --rc geninfo_unexecuted_blocks=1 00:46:49.433 00:46:49.433 ' 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:46:49.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:49.433 --rc genhtml_branch_coverage=1 00:46:49.433 --rc genhtml_function_coverage=1 00:46:49.433 --rc genhtml_legend=1 00:46:49.433 --rc geninfo_all_blocks=1 00:46:49.433 --rc geninfo_unexecuted_blocks=1 00:46:49.433 00:46:49.433 ' 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:46:49.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:49.433 --rc genhtml_branch_coverage=1 00:46:49.433 --rc genhtml_function_coverage=1 00:46:49.433 --rc genhtml_legend=1 00:46:49.433 --rc geninfo_all_blocks=1 00:46:49.433 --rc geninfo_unexecuted_blocks=1 00:46:49.433 00:46:49.433 ' 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:49.433 
14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:49.433 
14:57:12 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:49.433 14:57:12 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:46:49.433 
14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:46:49.433 14:57:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:57.566 14:57:19 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:46:57.566 Found 0000:31:00.0 (0x8086 - 0x159b) 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:46:57.566 Found 0000:31:00.1 (0x8086 - 0x159b) 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:46:57.566 14:57:19 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:46:57.566 Found net devices under 0000:31:00.0: cvl_0_0 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:46:57.566 Found net devices under 0000:31:00.1: cvl_0_1 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:46:57.566 14:57:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:57.566 14:57:20 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:46:57.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:57.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:46:57.566 00:46:57.566 --- 10.0.0.2 ping statistics --- 00:46:57.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:57.566 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:57.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:46:57.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:46:57.566 00:46:57.566 --- 10.0.0.1 ping statistics --- 00:46:57.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:57.566 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:46:57.566 14:57:20 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=3406731 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 3406731 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 3406731 ']' 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:57.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:46:57.566 14:57:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:46:57.566 [2024-10-07 14:57:20.239602] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:46:57.566 [2024-10-07 14:57:20.241498] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:46:57.566 [2024-10-07 14:57:20.241574] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:57.566 [2024-10-07 14:57:20.350672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:46:57.566 [2024-10-07 14:57:20.530927] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:57.566 [2024-10-07 14:57:20.530974] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:57.566 [2024-10-07 14:57:20.530987] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:57.566 [2024-10-07 14:57:20.530997] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:57.566 [2024-10-07 14:57:20.531015] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:57.566 [2024-10-07 14:57:20.532511] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:46:57.566 [2024-10-07 14:57:20.532538] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:46:57.566 [2024-10-07 14:57:20.780387] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:46:57.566 [2024-10-07 14:57:20.780572] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:46:57.566 [2024-10-07 14:57:20.780691] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:46:57.566 5000+0 records in 00:46:57.566 5000+0 records out 00:46:57.566 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0178729 s, 573 MB/s 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:46:57.566 AIO0 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:57.566 14:57:21 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:46:57.566 [2024-10-07 14:57:21.141246] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:46:57.566 [2024-10-07 14:57:21.186242] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3406731 0 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3406731 0 idle 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3406731 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3406731 -w 256 00:46:57.566 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:46:57.825 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3406731 root 20 0 20.1t 213120 103680 S 0.0 0.2 0:00.61 reactor_0' 00:46:57.825 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3406731 root 20 0 20.1t 213120 103680 S 0.0 0.2 0:00.61 reactor_0 00:46:57.825 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:46:57.825 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:46:57.825 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:46:57.825 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:46:57.825 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:46:57.825 
14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:46:57.825 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:46:57.825 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:46:57.825 14:57:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:46:57.825 14:57:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3406731 1 00:46:57.825 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3406731 1 idle 00:46:57.825 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3406731 00:46:57.825 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:46:57.825 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:46:57.825 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:46:57.825 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:46:57.825 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:46:57.825 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:46:57.825 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:46:57.825 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:46:57.825 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:46:57.825 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3406731 -w 256 00:46:57.825 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:46:58.083 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3406778 root 20 0 20.1t 213120 103680 S 0.0 0.2 0:00.00 reactor_1' 00:46:58.083 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3406778 root 20 0 20.1t 
213120 103680 S 0.0 0.2 0:00.00 reactor_1 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3406939 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3406731 0 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3406731 0 busy 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3406731 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3406731 -w 256 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3406731 root 20 0 20.1t 213120 103680 S 6.7 0.2 0:00.62 reactor_0' 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3406731 root 20 0 20.1t 213120 103680 S 6.7 0.2 0:00.62 reactor_0 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:46:58.084 14:57:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@26 -- # top -bHn 1 -p 3406731 -w 256 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3406731 root 20 0 20.1t 226944 104832 R 99.9 0.2 0:02.86 reactor_0' 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3406731 root 20 0 20.1t 226944 104832 R 99.9 0.2 0:02.86 reactor_0 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3406731 1 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3406731 1 busy 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3406731 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local 
busy_threshold=30 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3406731 -w 256 00:46:59.464 14:57:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:46:59.464 14:57:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3406778 root 20 0 20.1t 226944 104832 R 93.3 0.2 0:01.31 reactor_1' 00:46:59.464 14:57:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3406778 root 20 0 20.1t 226944 104832 R 93.3 0.2 0:01.31 reactor_1 00:46:59.464 14:57:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:46:59.464 14:57:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:46:59.464 14:57:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:46:59.464 14:57:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:46:59.464 14:57:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:46:59.464 14:57:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:46:59.464 14:57:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:46:59.464 14:57:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:46:59.464 14:57:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3406939 00:47:09.460 Initializing NVMe Controllers 00:47:09.460 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:47:09.460 
Controller IO queue size 256, less than required. 00:47:09.460 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:47:09.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:47:09.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:47:09.460 Initialization complete. Launching workers. 00:47:09.460 ======================================================== 00:47:09.460 Latency(us) 00:47:09.460 Device Information : IOPS MiB/s Average min max 00:47:09.460 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 18464.44 72.13 13869.95 4434.07 30866.78 00:47:09.460 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 15351.95 59.97 16679.47 9576.13 18733.93 00:47:09.460 ======================================================== 00:47:09.460 Total : 33816.39 132.10 15145.42 4434.07 30866.78 00:47:09.460 00:47:09.460 14:57:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:47:09.460 14:57:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3406731 0 00:47:09.460 14:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3406731 0 idle 00:47:09.460 14:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3406731 00:47:09.460 14:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:47:09.460 14:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:47:09.460 14:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:47:09.460 14:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:47:09.460 14:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:47:09.460 14:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:47:09.460 14:57:31 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:47:09.460 14:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:47:09.460 14:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:47:09.460 14:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3406731 -w 256 00:47:09.460 14:57:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3406731 root 20 0 20.1t 226944 104832 S 0.0 0.2 0:20.60 reactor_0' 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3406731 root 20 0 20.1t 226944 104832 S 0.0 0.2 0:20.60 reactor_0 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3406731 1 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3406731 1 idle 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3406731 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3406731 -w 256 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3406778 root 20 0 20.1t 226944 104832 S 0.0 0.2 0:10.01 reactor_1' 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3406778 root 20 0 20.1t 226944 104832 S 0.0 0.2 0:10.01 reactor_1 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:47:09.460 14:57:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:47:09.460 14:57:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:47:09.460 14:57:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:47:09.460 14:57:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:47:09.460 14:57:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:47:09.460 14:57:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:47:11.372 14:57:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:47:11.372 14:57:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:47:11.372 14:57:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:47:11.372 14:57:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:47:11.372 14:57:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:47:11.372 14:57:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:47:11.372 14:57:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:47:11.372 14:57:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3406731 0 00:47:11.372 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3406731 0 idle 00:47:11.372 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3406731 00:47:11.372 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:47:11.372 14:57:35 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:47:11.372 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:47:11.372 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:47:11.372 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:47:11.372 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:47:11.372 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:47:11.372 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:47:11.372 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:47:11.372 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3406731 -w 256 00:47:11.372 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3406731 root 20 0 20.1t 298368 130176 S 6.2 0.2 0:21.12 reactor_0' 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3406731 root 20 0 20.1t 298368 130176 S 6.2 0.2 0:21.12 reactor_0 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # 
return 0 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3406731 1 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3406731 1 idle 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3406731 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3406731 -w 256 00:47:11.633 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:47:11.893 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3406778 root 20 0 20.1t 298368 130176 S 0.0 0.2 0:10.34 reactor_1' 00:47:11.893 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3406778 root 20 0 20.1t 298368 130176 S 0.0 0.2 0:10.34 reactor_1 00:47:11.893 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:47:11.893 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:47:11.893 14:57:35 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:47:11.893 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:47:11.893 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:47:11.893 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:47:11.893 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:47:11.893 14:57:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:47:11.893 14:57:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:47:12.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:47:12.464 14:57:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:47:12.464 14:57:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:47:12.464 14:57:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:47:12.464 14:57:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:47:12.464 14:57:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:47:12.464 14:57:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:47:12.464 14:57:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:47:12.464 14:57:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:47:12.464 14:57:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:47:12.464 14:57:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:47:12.464 14:57:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:47:12.464 14:57:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:12.464 14:57:35 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:47:12.464 14:57:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:12.464 14:57:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:12.464 rmmod nvme_tcp 00:47:12.464 rmmod nvme_fabrics 00:47:12.464 rmmod nvme_keyring 00:47:12.464 14:57:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:12.464 14:57:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:47:12.464 14:57:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:47:12.464 14:57:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 3406731 ']' 00:47:12.464 14:57:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 3406731 00:47:12.464 14:57:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 3406731 ']' 00:47:12.464 14:57:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 3406731 00:47:12.464 14:57:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:47:12.464 14:57:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:47:12.464 14:57:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3406731 00:47:12.464 14:57:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:47:12.464 14:57:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:47:12.464 14:57:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3406731' 00:47:12.464 killing process with pid 3406731 00:47:12.464 14:57:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 3406731 00:47:12.464 14:57:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 3406731 00:47:13.404 14:57:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:47:13.404 14:57:37 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:47:13.404 14:57:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:47:13.404 14:57:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:47:13.404 14:57:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:47:13.404 14:57:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:47:13.404 14:57:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:47:13.404 14:57:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:13.404 14:57:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:47:13.404 14:57:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:13.404 14:57:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:47:13.404 14:57:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:15.946 14:57:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:47:15.946 00:47:15.946 real 0m26.334s 00:47:15.946 user 0m41.454s 00:47:15.946 sys 0m9.975s 00:47:15.946 14:57:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:15.946 14:57:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:47:15.946 ************************************ 00:47:15.946 END TEST nvmf_interrupt 00:47:15.946 ************************************ 00:47:15.946 00:47:15.946 real 38m37.029s 00:47:15.946 user 92m24.051s 00:47:15.946 sys 11m5.991s 00:47:15.946 14:57:39 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:15.946 14:57:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:15.946 ************************************ 00:47:15.946 END TEST nvmf_tcp 00:47:15.946 ************************************ 00:47:15.946 14:57:39 -- 
spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:47:15.946 14:57:39 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:47:15.946 14:57:39 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:47:15.946 14:57:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:47:15.946 14:57:39 -- common/autotest_common.sh@10 -- # set +x 00:47:15.946 ************************************ 00:47:15.946 START TEST spdkcli_nvmf_tcp 00:47:15.946 ************************************ 00:47:15.946 14:57:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:47:15.946 * Looking for test storage... 00:47:15.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:47:15.946 14:57:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:47:15.946 14:57:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:47:15.946 14:57:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:47:15.946 14:57:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:47:15.946 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:15.946 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:15.946 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:15.946 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:47:15.946 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:47:15.946 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:47:15.946 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:47:15.946 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:47:15.946 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:47:15.946 
14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:47:15.946 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:15.946 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:47:15.946 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:47:15.946 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:15.946 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:47:15.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:15.947 --rc genhtml_branch_coverage=1 00:47:15.947 --rc genhtml_function_coverage=1 00:47:15.947 
--rc genhtml_legend=1 00:47:15.947 --rc geninfo_all_blocks=1 00:47:15.947 --rc geninfo_unexecuted_blocks=1 00:47:15.947 00:47:15.947 ' 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:47:15.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:15.947 --rc genhtml_branch_coverage=1 00:47:15.947 --rc genhtml_function_coverage=1 00:47:15.947 --rc genhtml_legend=1 00:47:15.947 --rc geninfo_all_blocks=1 00:47:15.947 --rc geninfo_unexecuted_blocks=1 00:47:15.947 00:47:15.947 ' 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:47:15.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:15.947 --rc genhtml_branch_coverage=1 00:47:15.947 --rc genhtml_function_coverage=1 00:47:15.947 --rc genhtml_legend=1 00:47:15.947 --rc geninfo_all_blocks=1 00:47:15.947 --rc geninfo_unexecuted_blocks=1 00:47:15.947 00:47:15.947 ' 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:47:15.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:15.947 --rc genhtml_branch_coverage=1 00:47:15.947 --rc genhtml_function_coverage=1 00:47:15.947 --rc genhtml_legend=1 00:47:15.947 --rc geninfo_all_blocks=1 00:47:15.947 --rc geninfo_unexecuted_blocks=1 00:47:15.947 00:47:15.947 ' 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- 
nvmf/common.sh@7 -- # uname -s 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:15.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 
00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3410455 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3410455 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 3410455 ']' 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:15.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:47:15.947 14:57:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:15.947 [2024-10-07 14:57:39.527700] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:47:15.947 [2024-10-07 14:57:39.527811] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3410455 ] 00:47:15.947 [2024-10-07 14:57:39.646123] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:47:16.207 [2024-10-07 14:57:39.827257] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:47:16.207 [2024-10-07 14:57:39.827334] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:47:16.778 14:57:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:16.778 14:57:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:47:16.778 14:57:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:47:16.778 14:57:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:16.778 14:57:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:16.778 14:57:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:47:16.778 14:57:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:47:16.778 14:57:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:47:16.778 14:57:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:16.778 14:57:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:16.778 14:57:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:47:16.778 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:47:16.778 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:47:16.778 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:47:16.778 '\''/bdevs/malloc create 32 
512 Malloc5'\'' '\''Malloc5'\'' True 00:47:16.778 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:47:16.778 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:47:16.778 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:47:16.778 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:47:16.778 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:47:16.778 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:47:16.778 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:47:16.778 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:47:16.778 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:47:16.778 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:47:16.778 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:47:16.778 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:47:16.778 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:47:16.778 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:47:16.778 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:47:16.778 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:47:16.778 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:47:16.778 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:47:16.778 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:47:16.778 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:47:16.778 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:47:16.778 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:47:16.778 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:47:16.778 ' 00:47:19.320 [2024-10-07 14:57:42.869031] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:20.702 [2024-10-07 14:57:44.077027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:47:22.611 [2024-10-07 14:57:46.295388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:47:24.521 [2024-10-07 14:57:48.200910] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:47:26.436 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:47:26.436 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:47:26.436 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:47:26.436 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:47:26.436 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:47:26.436 Executing command: 
['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:47:26.436 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:47:26.436 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:47:26.436 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:47:26.436 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:47:26.436 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:47:26.436 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:47:26.436 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:47:26.436 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:47:26.436 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:47:26.436 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:47:26.436 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:47:26.436 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:47:26.436 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:47:26.436 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:47:26.436 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:47:26.436 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:47:26.436 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:47:26.436 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:47:26.436 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:47:26.436 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:47:26.436 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:47:26.436 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:47:26.436 14:57:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:47:26.436 14:57:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:26.436 14:57:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:26.436 14:57:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:47:26.436 14:57:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:26.436 14:57:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:26.436 14:57:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:47:26.436 14:57:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:47:26.697 14:57:50 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:47:26.697 14:57:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:47:26.697 14:57:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:47:26.697 14:57:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:26.697 14:57:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:26.697 14:57:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:47:26.697 14:57:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:26.697 14:57:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:26.697 14:57:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:47:26.697 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:47:26.697 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:47:26.697 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:47:26.697 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:47:26.697 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:47:26.697 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:47:26.697 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:47:26.697 '\''/bdevs/malloc delete 
Malloc6'\'' '\''Malloc6'\'' 00:47:26.697 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:47:26.697 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:47:26.697 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:47:26.697 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:47:26.697 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:47:26.697 ' 00:47:31.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:47:31.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:47:31.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:47:31.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:47:31.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:47:31.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:47:31.978 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:47:31.978 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:47:31.978 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:47:31.978 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:47:31.978 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:47:31.978 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:47:31.978 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:47:31.978 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:47:31.978 14:57:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit 
spdkcli_clear_nvmf_config 00:47:31.978 14:57:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:31.978 14:57:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:32.238 14:57:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3410455 00:47:32.238 14:57:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3410455 ']' 00:47:32.238 14:57:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3410455 00:47:32.238 14:57:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:47:32.238 14:57:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:47:32.238 14:57:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3410455 00:47:32.238 14:57:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:47:32.238 14:57:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:47:32.238 14:57:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3410455' 00:47:32.238 killing process with pid 3410455 00:47:32.238 14:57:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 3410455 00:47:32.238 14:57:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 3410455 00:47:33.178 14:57:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:47:33.178 14:57:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:47:33.178 14:57:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3410455 ']' 00:47:33.178 14:57:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3410455 00:47:33.178 14:57:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3410455 ']' 00:47:33.178 14:57:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3410455 00:47:33.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3410455) - No such process 00:47:33.178 14:57:56 
spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 3410455 is not found' 00:47:33.178 Process with pid 3410455 is not found 00:47:33.178 14:57:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:47:33.178 14:57:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:47:33.178 14:57:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:47:33.178 00:47:33.178 real 0m17.442s 00:47:33.178 user 0m35.315s 00:47:33.178 sys 0m0.908s 00:47:33.178 14:57:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:33.178 14:57:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:33.178 ************************************ 00:47:33.178 END TEST spdkcli_nvmf_tcp 00:47:33.178 ************************************ 00:47:33.178 14:57:56 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:47:33.178 14:57:56 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:47:33.178 14:57:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:47:33.178 14:57:56 -- common/autotest_common.sh@10 -- # set +x 00:47:33.178 ************************************ 00:47:33.178 START TEST nvmf_identify_passthru 00:47:33.178 ************************************ 00:47:33.178 14:57:56 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:47:33.178 * Looking for test storage... 
00:47:33.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:47:33.178 14:57:56 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:47:33.178 14:57:56 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:47:33.178 14:57:56 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:47:33.439 14:57:56 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:33.439 14:57:56 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:47:33.439 14:57:56 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:33.439 14:57:56 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:47:33.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:33.439 --rc genhtml_branch_coverage=1 00:47:33.439 --rc genhtml_function_coverage=1 00:47:33.439 --rc genhtml_legend=1 00:47:33.439 --rc geninfo_all_blocks=1 00:47:33.439 --rc geninfo_unexecuted_blocks=1 00:47:33.439 00:47:33.439 ' 00:47:33.440 14:57:56 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:47:33.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:33.440 --rc genhtml_branch_coverage=1 00:47:33.440 --rc genhtml_function_coverage=1 
00:47:33.440 --rc genhtml_legend=1 00:47:33.440 --rc geninfo_all_blocks=1 00:47:33.440 --rc geninfo_unexecuted_blocks=1 00:47:33.440 00:47:33.440 ' 00:47:33.440 14:57:56 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:47:33.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:33.440 --rc genhtml_branch_coverage=1 00:47:33.440 --rc genhtml_function_coverage=1 00:47:33.440 --rc genhtml_legend=1 00:47:33.440 --rc geninfo_all_blocks=1 00:47:33.440 --rc geninfo_unexecuted_blocks=1 00:47:33.440 00:47:33.440 ' 00:47:33.440 14:57:56 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:47:33.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:33.440 --rc genhtml_branch_coverage=1 00:47:33.440 --rc genhtml_function_coverage=1 00:47:33.440 --rc genhtml_legend=1 00:47:33.440 --rc geninfo_all_blocks=1 00:47:33.440 --rc geninfo_unexecuted_blocks=1 00:47:33.440 00:47:33.440 ' 00:47:33.440 14:57:56 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:33.440 14:57:56 nvmf_identify_passthru -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:33.440 14:57:56 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:47:33.440 14:57:56 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:33.440 14:57:56 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:33.440 14:57:56 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:33.440 14:57:56 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:33.440 14:57:56 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:33.440 14:57:56 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:33.440 14:57:56 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:47:33.440 14:57:56 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:33.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:33.440 14:57:56 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:33.440 14:57:56 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:47:33.440 14:57:56 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:33.440 14:57:56 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:33.440 14:57:56 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:33.440 14:57:56 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:33.440 14:57:56 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:33.440 14:57:56 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:33.440 14:57:56 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:47:33.440 14:57:56 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:33.440 14:57:56 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@474 -- 
# prepare_net_devs 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:33.440 14:57:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:33.440 14:57:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:47:33.440 14:57:56 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:47:33.440 14:57:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:47:41.570 14:58:03 
nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:47:41.570 
14:58:03 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:47:41.570 Found 0000:31:00.0 (0x8086 - 0x159b) 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:47:41.570 Found 0000:31:00.1 (0x8086 - 0x159b) 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:41.570 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:47:41.571 Found net devices under 0000:31:00.0: cvl_0_0 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:47:41.571 Found net devices under 0000:31:00.1: cvl_0_1 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 
00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:47:41.571 14:58:03 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:47:41.571 14:58:04 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:47:41.571 14:58:04 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:47:41.571 14:58:04 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:47:41.571 14:58:04 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:47:41.571 14:58:04 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:47:41.571 14:58:04 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:47:41.571 14:58:04 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:47:41.571 14:58:04 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:47:41.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:41.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.511 ms 00:47:41.571 00:47:41.571 --- 10.0.0.2 ping statistics --- 00:47:41.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:41.571 rtt min/avg/max/mdev = 0.511/0.511/0.511/0.000 ms 00:47:41.571 14:58:04 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:47:41.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:47:41.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:47:41.571 00:47:41.571 --- 10.0.0.1 ping statistics --- 00:47:41.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:41.571 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:47:41.571 14:58:04 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:41.571 14:58:04 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:47:41.571 14:58:04 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:47:41.571 14:58:04 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:41.571 14:58:04 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:47:41.571 14:58:04 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:47:41.571 14:58:04 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:41.571 14:58:04 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:47:41.571 14:58:04 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:47:41.571 14:58:04 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:47:41.571 14:58:04 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:41.571 14:58:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:41.571 14:58:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:47:41.571 14:58:04 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:47:41.571 14:58:04 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:47:41.571 14:58:04 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:47:41.571 14:58:04 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:47:41.571 14:58:04 nvmf_identify_passthru -- 
common/autotest_common.sh@1496 -- # bdfs=() 00:47:41.571 14:58:04 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:47:41.571 14:58:04 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:47:41.571 14:58:04 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:47:41.571 14:58:04 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:47:41.571 14:58:04 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:47:41.571 14:58:04 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:47:41.571 14:58:04 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:47:41.571 14:58:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:47:41.571 14:58:04 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:47:41.571 14:58:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:47:41.571 14:58:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:47:41.571 14:58:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:47:41.571 14:58:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:47:41.571 14:58:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:47:41.571 14:58:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:47:41.571 14:58:04 nvmf_identify_passthru -- 
target/identify_passthru.sh@24 -- # awk '{print $3}' 00:47:42.143 14:58:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:47:42.143 14:58:05 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:47:42.143 14:58:05 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:42.143 14:58:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:42.143 14:58:05 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:47:42.143 14:58:05 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:42.143 14:58:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:42.143 14:58:05 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3417795 00:47:42.143 14:58:05 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:47:42.143 14:58:05 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:47:42.143 14:58:05 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3417795 00:47:42.143 14:58:05 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 3417795 ']' 00:47:42.143 14:58:05 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:42.143 14:58:05 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:47:42.143 14:58:05 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:42.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:47:42.143 14:58:05 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:47:42.143 14:58:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:42.143 [2024-10-07 14:58:05.711469] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:47:42.143 [2024-10-07 14:58:05.711600] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:42.403 [2024-10-07 14:58:05.853391] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:42.403 [2024-10-07 14:58:06.038282] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:42.403 [2024-10-07 14:58:06.038334] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:42.403 [2024-10-07 14:58:06.038346] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:42.403 [2024-10-07 14:58:06.038358] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:42.403 [2024-10-07 14:58:06.038367] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:47:42.403 [2024-10-07 14:58:06.040635] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:47:42.403 [2024-10-07 14:58:06.040735] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:47:42.403 [2024-10-07 14:58:06.040855] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:47:42.403 [2024-10-07 14:58:06.040885] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:47:42.973 14:58:06 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:42.973 14:58:06 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:47:42.973 14:58:06 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:47:42.973 14:58:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:42.973 14:58:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:42.973 INFO: Log level set to 20 00:47:42.973 INFO: Requests: 00:47:42.973 { 00:47:42.973 "jsonrpc": "2.0", 00:47:42.973 "method": "nvmf_set_config", 00:47:42.973 "id": 1, 00:47:42.973 "params": { 00:47:42.973 "admin_cmd_passthru": { 00:47:42.973 "identify_ctrlr": true 00:47:42.973 } 00:47:42.973 } 00:47:42.973 } 00:47:42.973 00:47:42.973 INFO: response: 00:47:42.973 { 00:47:42.973 "jsonrpc": "2.0", 00:47:42.973 "id": 1, 00:47:42.973 "result": true 00:47:42.973 } 00:47:42.973 00:47:42.973 14:58:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:42.973 14:58:06 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:47:42.973 14:58:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:42.973 14:58:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:42.973 INFO: Setting log level to 20 00:47:42.973 INFO: Setting log level to 20 00:47:42.973 INFO: Log level set to 20 00:47:42.973 INFO: Log level set to 20 00:47:42.973 
INFO: Requests: 00:47:42.973 { 00:47:42.973 "jsonrpc": "2.0", 00:47:42.973 "method": "framework_start_init", 00:47:42.973 "id": 1 00:47:42.973 } 00:47:42.973 00:47:42.973 INFO: Requests: 00:47:42.973 { 00:47:42.973 "jsonrpc": "2.0", 00:47:42.973 "method": "framework_start_init", 00:47:42.973 "id": 1 00:47:42.973 } 00:47:42.973 00:47:43.233 [2024-10-07 14:58:06.748877] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:47:43.233 INFO: response: 00:47:43.233 { 00:47:43.233 "jsonrpc": "2.0", 00:47:43.233 "id": 1, 00:47:43.233 "result": true 00:47:43.233 } 00:47:43.233 00:47:43.233 INFO: response: 00:47:43.233 { 00:47:43.233 "jsonrpc": "2.0", 00:47:43.233 "id": 1, 00:47:43.233 "result": true 00:47:43.233 } 00:47:43.233 00:47:43.233 14:58:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:43.233 14:58:06 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:47:43.233 14:58:06 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:43.233 14:58:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:43.233 INFO: Setting log level to 40 00:47:43.233 INFO: Setting log level to 40 00:47:43.233 INFO: Setting log level to 40 00:47:43.233 [2024-10-07 14:58:06.764414] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:43.233 14:58:06 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:43.233 14:58:06 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:47:43.233 14:58:06 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:43.233 14:58:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:43.233 14:58:06 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:47:43.233 14:58:06 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:43.233 14:58:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:43.493 Nvme0n1 00:47:43.493 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:43.493 14:58:07 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:47:43.493 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:43.493 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:43.493 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:43.493 14:58:07 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:47:43.493 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:43.493 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:43.493 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:43.493 14:58:07 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:47:43.493 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:43.493 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:43.493 [2024-10-07 14:58:07.187248] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:43.493 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:43.493 14:58:07 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:47:43.493 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:43.493 14:58:07 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:43.754 [ 00:47:43.754 { 00:47:43.754 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:47:43.754 "subtype": "Discovery", 00:47:43.754 "listen_addresses": [], 00:47:43.754 "allow_any_host": true, 00:47:43.754 "hosts": [] 00:47:43.754 }, 00:47:43.754 { 00:47:43.754 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:47:43.754 "subtype": "NVMe", 00:47:43.754 "listen_addresses": [ 00:47:43.754 { 00:47:43.754 "trtype": "TCP", 00:47:43.754 "adrfam": "IPv4", 00:47:43.754 "traddr": "10.0.0.2", 00:47:43.754 "trsvcid": "4420" 00:47:43.754 } 00:47:43.754 ], 00:47:43.754 "allow_any_host": true, 00:47:43.754 "hosts": [], 00:47:43.754 "serial_number": "SPDK00000000000001", 00:47:43.754 "model_number": "SPDK bdev Controller", 00:47:43.754 "max_namespaces": 1, 00:47:43.754 "min_cntlid": 1, 00:47:43.754 "max_cntlid": 65519, 00:47:43.754 "namespaces": [ 00:47:43.754 { 00:47:43.754 "nsid": 1, 00:47:43.754 "bdev_name": "Nvme0n1", 00:47:43.754 "name": "Nvme0n1", 00:47:43.754 "nguid": "3634473052605494002538450000002B", 00:47:43.754 "uuid": "36344730-5260-5494-0025-38450000002b" 00:47:43.754 } 00:47:43.754 ] 00:47:43.754 } 00:47:43.754 ] 00:47:43.754 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:43.754 14:58:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:47:43.754 14:58:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:47:43.754 14:58:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:47:44.015 14:58:07 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:47:44.015 14:58:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:47:44.015 14:58:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:47:44.015 14:58:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:47:44.275 14:58:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:47:44.275 14:58:07 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:47:44.275 14:58:07 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:47:44.275 14:58:07 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:47:44.275 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:44.275 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:44.275 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:44.275 14:58:07 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:47:44.275 14:58:07 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:47:44.275 14:58:07 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:47:44.275 14:58:07 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:47:44.275 14:58:07 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:47:44.275 14:58:07 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:47:44.275 14:58:07 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:47:44.275 14:58:07 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:47:44.275 rmmod nvme_tcp 00:47:44.275 rmmod nvme_fabrics 00:47:44.275 rmmod nvme_keyring 00:47:44.275 14:58:07 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:47:44.275 14:58:07 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:47:44.275 14:58:07 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:47:44.275 14:58:07 nvmf_identify_passthru -- nvmf/common.sh@515 -- # '[' -n 3417795 ']' 00:47:44.275 14:58:07 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 3417795 00:47:44.275 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 3417795 ']' 00:47:44.275 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 3417795 00:47:44.275 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:47:44.275 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:47:44.275 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3417795 00:47:44.275 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:47:44.275 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:47:44.275 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3417795' 00:47:44.275 killing process with pid 3417795 00:47:44.275 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 3417795 00:47:44.275 14:58:07 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 3417795 00:47:45.218 14:58:08 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:47:45.218 14:58:08 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:47:45.218 14:58:08 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:47:45.218 14:58:08 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:47:45.218 14:58:08 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:47:45.218 14:58:08 nvmf_identify_passthru -- 
nvmf/common.sh@789 -- # iptables-save 00:47:45.218 14:58:08 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:47:45.479 14:58:08 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:47:45.479 14:58:08 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:47:45.479 14:58:08 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:45.479 14:58:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:45.479 14:58:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:47.389 14:58:11 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:47:47.389 00:47:47.389 real 0m14.280s 00:47:47.389 user 0m12.808s 00:47:47.389 sys 0m6.837s 00:47:47.389 14:58:11 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:47:47.389 14:58:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:47:47.389 ************************************ 00:47:47.389 END TEST nvmf_identify_passthru 00:47:47.389 ************************************ 00:47:47.389 14:58:11 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:47:47.389 14:58:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:47:47.389 14:58:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:47:47.389 14:58:11 -- common/autotest_common.sh@10 -- # set +x 00:47:47.390 ************************************ 00:47:47.390 START TEST nvmf_dif 00:47:47.390 ************************************ 00:47:47.390 14:58:11 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:47:47.650 * Looking for test storage... 
00:47:47.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:47:47.650 14:58:11 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:47:47.650 14:58:11 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:47:47.650 14:58:11 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:47:47.650 14:58:11 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:47:47.650 14:58:11 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:47.650 14:58:11 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:47.650 14:58:11 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:47.650 14:58:11 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:47:47.650 14:58:11 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:47:47.650 14:58:11 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:47:47.651 14:58:11 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:47.651 14:58:11 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:47:47.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:47.651 --rc genhtml_branch_coverage=1 00:47:47.651 --rc genhtml_function_coverage=1 00:47:47.651 --rc genhtml_legend=1 00:47:47.651 --rc geninfo_all_blocks=1 00:47:47.651 --rc geninfo_unexecuted_blocks=1 00:47:47.651 00:47:47.651 ' 00:47:47.651 14:58:11 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:47:47.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:47.651 --rc genhtml_branch_coverage=1 00:47:47.651 --rc genhtml_function_coverage=1 00:47:47.651 --rc genhtml_legend=1 00:47:47.651 --rc geninfo_all_blocks=1 00:47:47.651 --rc geninfo_unexecuted_blocks=1 00:47:47.651 00:47:47.651 ' 00:47:47.651 14:58:11 nvmf_dif -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:47:47.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:47.651 --rc genhtml_branch_coverage=1 00:47:47.651 --rc genhtml_function_coverage=1 00:47:47.651 --rc genhtml_legend=1 00:47:47.651 --rc geninfo_all_blocks=1 00:47:47.651 --rc geninfo_unexecuted_blocks=1 00:47:47.651 00:47:47.651 ' 00:47:47.651 14:58:11 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:47:47.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:47.651 --rc genhtml_branch_coverage=1 00:47:47.651 --rc genhtml_function_coverage=1 00:47:47.651 --rc genhtml_legend=1 00:47:47.651 --rc geninfo_all_blocks=1 00:47:47.651 --rc geninfo_unexecuted_blocks=1 00:47:47.651 00:47:47.651 ' 00:47:47.651 14:58:11 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:47:47.651 14:58:11 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:47.651 14:58:11 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:47.651 14:58:11 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:47.651 14:58:11 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:47.651 14:58:11 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:47.651 14:58:11 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:47:47.651 14:58:11 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:47:47.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:47.651 14:58:11 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:47:47.651 14:58:11 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
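Earlier in this run, `scripts/common.sh` walked a field-by-field version compare (`lt 1.15 2` via `cmp_versions`) to pick lcov options. The same strictly-less-than check can be sketched with GNU `sort -V`, which orders version strings numerically — a simplification of the script's own per-field loop, assuming GNU coreutils is available:

```shell
#!/usr/bin/env bash
# Succeeds when $1 is strictly lower than $2 under version ordering,
# replacing the manual read -ra ver1/ver2 loop in scripts/common.sh.
version_lt() {
    [ "$1" != "$2" ] &&
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

version_lt 1.15 2 && echo "1.15 < 2"    # matches the lt 1.15 2 result in the trace
version_lt 2.1 2.0 || echo "2.1 >= 2.0"
```

Version ordering matters here: a plain string compare would call `1.9` greater than `1.15`, while `sort -V` correctly ranks it lower.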
00:47:47.651 14:58:11 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:47:47.651 14:58:11 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:47:47.651 14:58:11 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:47.651 14:58:11 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:47:47.651 14:58:11 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:47:47.651 14:58:11 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:47:47.651 14:58:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:55.785 14:58:17 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:47:55.785 14:58:17 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:47:55.785 14:58:17 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:47:55.785 14:58:17 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:47:55.785 14:58:17 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:47:55.785 14:58:17 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:47:55.785 14:58:17 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:47:55.785 14:58:17 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:47:55.785 14:58:17 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:47:55.785 14:58:17 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:47:55.785 14:58:17 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:47:55.785 14:58:17 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:47:55.785 14:58:17 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:47:55.785 14:58:17 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:47:55.785 14:58:17 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:47:55.785 14:58:17 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:47:55.785 14:58:17 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:47:55.785 14:58:17 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:47:55.785 14:58:17 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:47:55.785 14:58:17 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:47:55.785 14:58:17 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:47:55.785 14:58:17 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:47:55.785 14:58:17 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:47:55.785 14:58:17 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:47:55.785 14:58:18 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:47:55.785 14:58:18 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:47:55.785 14:58:18 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:47:55.785 14:58:18 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:47:55.785 14:58:18 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:47:55.785 14:58:18 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:47:55.785 14:58:18 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:47:55.785 14:58:18 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:47:55.785 14:58:18 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:47:55.785 14:58:18 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:55.785 14:58:18 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:47:55.785 Found 0000:31:00.0 (0x8086 - 0x159b) 00:47:55.785 14:58:18 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:55.785 14:58:18 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:55.785 14:58:18 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:55.785 14:58:18 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:55.785 14:58:18 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:55.785 14:58:18 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:47:55.785 14:58:18 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:47:55.786 Found 0000:31:00.1 (0x8086 - 0x159b) 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:47:55.786 14:58:18 nvmf_dif -- 
nvmf/common.sh@420 -- # (( 1 == 0 )) 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:47:55.786 Found net devices under 0000:31:00.0: cvl_0_0 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:47:55.786 Found net devices under 0000:31:00.1: cvl_0_1 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:47:55.786 
14:58:18 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:47:55.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:47:55.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:47:55.786 00:47:55.786 --- 10.0.0.2 ping statistics --- 00:47:55.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:55.786 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:47:55.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:55.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:47:55.786 00:47:55.786 --- 10.0.0.1 ping statistics --- 00:47:55.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:55.786 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:47:55.786 14:58:18 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:47:57.701 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:47:57.701 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:47:57.701 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:47:57.701 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:47:57.701 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:47:57.701 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:47:57.701 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:47:57.701 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:47:57.701 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:47:57.701 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:47:57.701 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:47:57.701 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:47:57.701 0000:00:01.5 (8086 0b00): Already 
using the vfio-pci driver 00:47:57.701 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:47:57.701 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:47:57.701 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:47:57.701 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:47:58.271 14:58:21 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:58.271 14:58:21 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:47:58.271 14:58:21 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:47:58.271 14:58:21 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:58.271 14:58:21 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:47:58.271 14:58:21 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:47:58.271 14:58:21 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:47:58.271 14:58:21 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:47:58.271 14:58:21 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:47:58.271 14:58:21 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:47:58.271 14:58:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:58.271 14:58:21 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=3423970 00:47:58.271 14:58:21 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 3423970 00:47:58.271 14:58:21 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:47:58.271 14:58:21 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 3423970 ']' 00:47:58.271 14:58:21 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:58.271 14:58:21 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:47:58.271 14:58:21 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:47:58.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:58.271 14:58:21 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:47:58.271 14:58:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:58.272 [2024-10-07 14:58:21.873008] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:47:58.272 [2024-10-07 14:58:21.873117] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:58.533 [2024-10-07 14:58:21.996018] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:58.533 [2024-10-07 14:58:22.176422] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:58.533 [2024-10-07 14:58:22.176471] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:58.533 [2024-10-07 14:58:22.176486] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:58.533 [2024-10-07 14:58:22.176497] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:58.533 [2024-10-07 14:58:22.176506] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:47:58.533 [2024-10-07 14:58:22.177737] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:47:59.104 14:58:22 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:47:59.104 14:58:22 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:47:59.104 14:58:22 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:47:59.104 14:58:22 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:47:59.104 14:58:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:59.104 14:58:22 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:59.104 14:58:22 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:47:59.104 14:58:22 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:47:59.104 14:58:22 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:59.104 14:58:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:59.104 [2024-10-07 14:58:22.671339] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:59.104 14:58:22 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:59.104 14:58:22 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:47:59.104 14:58:22 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:47:59.104 14:58:22 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:47:59.104 14:58:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:47:59.104 ************************************ 00:47:59.104 START TEST fio_dif_1_default 00:47:59.104 ************************************ 00:47:59.104 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:47:59.104 14:58:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:47:59.104 14:58:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:47:59.104 14:58:22 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:47:59.104 14:58:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:47:59.105 bdev_null0 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:47:59.105 [2024-10-07 14:58:22.755778] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:47:59.105 { 00:47:59.105 "params": { 00:47:59.105 "name": "Nvme$subsystem", 00:47:59.105 "trtype": "$TEST_TRANSPORT", 00:47:59.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:47:59.105 "adrfam": "ipv4", 00:47:59.105 "trsvcid": "$NVMF_PORT", 00:47:59.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:47:59.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:47:59.105 "hdgst": ${hdgst:-false}, 00:47:59.105 "ddgst": ${ddgst:-false} 00:47:59.105 }, 00:47:59.105 "method": "bdev_nvme_attach_controller" 00:47:59.105 } 00:47:59.105 EOF 00:47:59.105 )") 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 
00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:47:59.105 "params": { 00:47:59.105 "name": "Nvme0", 00:47:59.105 "trtype": "tcp", 00:47:59.105 "traddr": "10.0.0.2", 00:47:59.105 "adrfam": "ipv4", 00:47:59.105 "trsvcid": "4420", 00:47:59.105 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:59.105 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:59.105 "hdgst": false, 00:47:59.105 "ddgst": false 00:47:59.105 }, 00:47:59.105 "method": "bdev_nvme_attach_controller" 00:47:59.105 }' 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:47:59.105 14:58:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:47:59.707 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:47:59.707 fio-3.35 00:47:59.707 Starting 1 thread 00:48:12.154 00:48:12.154 filename0: (groupid=0, jobs=1): err= 0: pid=3424669: Mon Oct 7 14:58:34 2024 00:48:12.154 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10025msec) 00:48:12.154 slat (nsec): min=5912, max=46168, avg=8027.27, stdev=2643.25 00:48:12.154 clat (usec): min=920, max=43217, avg=41743.96, stdev=2682.24 00:48:12.154 lat (usec): min=927, max=43263, avg=41751.99, stdev=2682.36 00:48:12.154 clat percentiles (usec): 00:48:12.154 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 
00:48:12.154 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:48:12.154 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:48:12.154 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:48:12.154 | 99.99th=[43254] 00:48:12.154 bw ( KiB/s): min= 352, max= 416, per=99.73%, avg=382.40, stdev=12.61, samples=20 00:48:12.154 iops : min= 88, max= 104, avg=95.60, stdev= 3.15, samples=20 00:48:12.154 lat (usec) : 1000=0.42% 00:48:12.154 lat (msec) : 50=99.58% 00:48:12.154 cpu : usr=94.06%, sys=5.67%, ctx=12, majf=0, minf=1635 00:48:12.154 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:12.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:12.154 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:12.154 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:12.154 latency : target=0, window=0, percentile=100.00%, depth=4 00:48:12.154 00:48:12.154 Run status group 0 (all jobs): 00:48:12.154 READ: bw=383KiB/s (392kB/s), 383KiB/s-383KiB/s (392kB/s-392kB/s), io=3840KiB (3932kB), run=10025-10025msec 00:48:12.154 ----------------------------------------------------- 00:48:12.154 Suppressions used: 00:48:12.154 count bytes template 00:48:12.154 1 8 /usr/src/fio/parse.c 00:48:12.154 1 8 libtcmalloc_minimal.so 00:48:12.154 1 904 libcrypto.so 00:48:12.154 ----------------------------------------------------- 00:48:12.154 00:48:12.154 14:58:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:48:12.154 14:58:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:48:12.154 14:58:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:48:12.154 14:58:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:48:12.154 14:58:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:48:12.154 14:58:34 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:48:12.154 14:58:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:12.154 14:58:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:48:12.154 14:58:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:12.154 14:58:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:48:12.154 14:58:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:12.154 14:58:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:48:12.154 14:58:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:12.154 00:48:12.154 real 0m12.196s 00:48:12.154 user 0m22.362s 00:48:12.154 sys 0m1.138s 00:48:12.154 14:58:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:48:12.154 14:58:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:48:12.154 ************************************ 00:48:12.154 END TEST fio_dif_1_default 00:48:12.154 ************************************ 00:48:12.154 14:58:34 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:48:12.154 14:58:34 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:48:12.154 14:58:34 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:48:12.154 14:58:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:48:12.154 ************************************ 00:48:12.154 START TEST fio_dif_1_multi_subsystems 00:48:12.154 ************************************ 00:48:12.154 14:58:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:48:12.154 14:58:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:48:12.154 14:58:34 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:48:12.154 14:58:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:48:12.154 14:58:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:48:12.154 14:58:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:48:12.154 14:58:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:48:12.154 14:58:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:48:12.155 14:58:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:12.155 14:58:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:12.155 bdev_null0 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:12.155 14:58:35 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:12.155 [2024-10-07 14:58:35.034989] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:12.155 bdev_null1 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 
00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:12.155 { 00:48:12.155 "params": { 00:48:12.155 "name": "Nvme$subsystem", 00:48:12.155 "trtype": "$TEST_TRANSPORT", 00:48:12.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:12.155 "adrfam": "ipv4", 00:48:12.155 "trsvcid": "$NVMF_PORT", 00:48:12.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:12.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:12.155 "hdgst": ${hdgst:-false}, 00:48:12.155 "ddgst": ${ddgst:-false} 00:48:12.155 }, 00:48:12.155 "method": "bdev_nvme_attach_controller" 00:48:12.155 } 00:48:12.155 EOF 00:48:12.155 )") 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:12.155 { 00:48:12.155 "params": { 00:48:12.155 "name": "Nvme$subsystem", 00:48:12.155 "trtype": "$TEST_TRANSPORT", 00:48:12.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:12.155 "adrfam": "ipv4", 00:48:12.155 "trsvcid": "$NVMF_PORT", 00:48:12.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:12.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:12.155 "hdgst": ${hdgst:-false}, 00:48:12.155 "ddgst": ${ddgst:-false} 00:48:12.155 }, 00:48:12.155 "method": "bdev_nvme_attach_controller" 00:48:12.155 } 00:48:12.155 EOF 00:48:12.155 )") 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 
00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:48:12.155 "params": { 00:48:12.155 "name": "Nvme0", 00:48:12.155 "trtype": "tcp", 00:48:12.155 "traddr": "10.0.0.2", 00:48:12.155 "adrfam": "ipv4", 00:48:12.155 "trsvcid": "4420", 00:48:12.155 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:12.155 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:12.155 "hdgst": false, 00:48:12.155 "ddgst": false 00:48:12.155 }, 00:48:12.155 "method": "bdev_nvme_attach_controller" 00:48:12.155 },{ 00:48:12.155 "params": { 00:48:12.155 "name": "Nvme1", 00:48:12.155 "trtype": "tcp", 00:48:12.155 "traddr": "10.0.0.2", 00:48:12.155 "adrfam": "ipv4", 00:48:12.155 "trsvcid": "4420", 00:48:12.155 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:48:12.155 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:48:12.155 "hdgst": false, 00:48:12.155 "ddgst": false 00:48:12.155 }, 00:48:12.155 "method": "bdev_nvme_attach_controller" 00:48:12.155 }' 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:48:12.155 14:58:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:12.155 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:48:12.155 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:48:12.155 fio-3.35 00:48:12.155 Starting 2 threads 00:48:24.394 00:48:24.394 filename0: (groupid=0, jobs=1): err= 0: pid=3427108: Mon Oct 7 14:58:46 2024 00:48:24.394 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10004msec) 00:48:24.394 slat (nsec): min=5918, max=44527, avg=8605.52, stdev=2973.47 00:48:24.394 clat (usec): min=40983, max=47293, avg=42005.22, stdev=418.75 00:48:24.394 lat (usec): min=40992, max=47337, avg=42013.83, stdev=419.53 00:48:24.394 clat percentiles (usec): 00:48:24.394 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:48:24.394 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:48:24.394 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:48:24.394 | 99.00th=[43254], 99.50th=[43254], 99.90th=[47449], 99.95th=[47449], 00:48:24.394 | 99.99th=[47449] 00:48:24.394 bw ( KiB/s): min= 352, max= 384, per=40.64%, avg=380.63, stdev=10.09, samples=19 00:48:24.394 iops : min= 88, max= 96, avg=95.16, stdev= 2.52, samples=19 00:48:24.394 lat (msec) : 50=100.00% 00:48:24.394 cpu : usr=95.42%, sys=4.33%, ctx=33, majf=0, minf=1633 00:48:24.394 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:24.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:24.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:24.394 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:24.394 latency : target=0, window=0, percentile=100.00%, depth=4 00:48:24.394 filename1: (groupid=0, jobs=1): err= 0: pid=3427109: Mon Oct 7 14:58:46 2024 00:48:24.394 read: IOPS=138, BW=555KiB/s (568kB/s)(5552KiB/10010msec) 00:48:24.394 slat (nsec): min=3001, max=16453, avg=7061.54, stdev=1588.93 00:48:24.394 clat (usec): min=807, max=43019, avg=28824.75, stdev=19108.39 00:48:24.394 lat (usec): min=813, max=43029, avg=28831.81, stdev=19108.16 00:48:24.394 clat percentiles 
(usec): 00:48:24.394 | 1.00th=[ 898], 5.00th=[ 930], 10.00th=[ 955], 20.00th=[ 971], 00:48:24.394 | 30.00th=[ 1029], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:48:24.394 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:48:24.394 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:48:24.394 | 99.99th=[43254] 00:48:24.395 bw ( KiB/s): min= 384, max= 768, per=59.14%, avg=553.60, stdev=176.22, samples=20 00:48:24.395 iops : min= 96, max= 192, avg=138.40, stdev=44.06, samples=20 00:48:24.395 lat (usec) : 1000=27.52% 00:48:24.395 lat (msec) : 2=4.47%, 50=68.01% 00:48:24.395 cpu : usr=95.38%, sys=4.39%, ctx=14, majf=0, minf=1635 00:48:24.395 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:24.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:24.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:24.395 issued rwts: total=1388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:24.395 latency : target=0, window=0, percentile=100.00%, depth=4 00:48:24.395 00:48:24.395 Run status group 0 (all jobs): 00:48:24.395 READ: bw=935KiB/s (958kB/s), 381KiB/s-555KiB/s (390kB/s-568kB/s), io=9360KiB (9585kB), run=10004-10010msec 00:48:24.395 ----------------------------------------------------- 00:48:24.395 Suppressions used: 00:48:24.395 count bytes template 00:48:24.395 2 16 /usr/src/fio/parse.c 00:48:24.395 1 8 libtcmalloc_minimal.so 00:48:24.395 1 904 libcrypto.so 00:48:24.395 ----------------------------------------------------- 00:48:24.395 00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 
00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 
00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:24.395 00:48:24.395 real 0m12.566s 00:48:24.395 user 0m37.660s 00:48:24.395 sys 0m1.426s 00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:48:24.395 14:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:48:24.395 ************************************ 00:48:24.395 END TEST fio_dif_1_multi_subsystems 00:48:24.395 ************************************ 00:48:24.395 14:58:47 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:48:24.395 14:58:47 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:48:24.395 14:58:47 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:48:24.395 14:58:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:48:24.395 ************************************ 00:48:24.395 START TEST fio_dif_rand_params 00:48:24.395 ************************************ 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # 
create_subsystems 0 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:24.395 bdev_null0 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:24.395 [2024-10-07 14:58:47.683372] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:24.395 { 00:48:24.395 "params": { 00:48:24.395 "name": "Nvme$subsystem", 00:48:24.395 "trtype": "$TEST_TRANSPORT", 00:48:24.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:24.395 "adrfam": "ipv4", 00:48:24.395 "trsvcid": "$NVMF_PORT", 00:48:24.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:24.395 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:48:24.395 "hdgst": ${hdgst:-false}, 00:48:24.395 "ddgst": ${ddgst:-false} 00:48:24.395 }, 00:48:24.395 "method": "bdev_nvme_attach_controller" 00:48:24.395 } 00:48:24.395 EOF 00:48:24.395 )") 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:48:24.395 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:24.396 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:48:24.396 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:48:24.396 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:24.396 14:58:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:48:24.396 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:24.396 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:48:24.396 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:48:24.396 14:58:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:48:24.396 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:24.396 14:58:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:48:24.396 14:58:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:48:24.396 14:58:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:48:24.396 "params": { 00:48:24.396 "name": "Nvme0", 00:48:24.396 "trtype": "tcp", 00:48:24.396 "traddr": "10.0.0.2", 00:48:24.396 "adrfam": "ipv4", 00:48:24.396 "trsvcid": "4420", 00:48:24.396 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:24.396 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:24.396 "hdgst": false, 00:48:24.396 "ddgst": false 00:48:24.396 }, 00:48:24.396 "method": "bdev_nvme_attach_controller" 00:48:24.396 }' 00:48:24.396 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:48:24.396 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:48:24.396 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:48:24.396 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:48:24.396 14:58:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:24.656 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:48:24.656 ... 
00:48:24.656 fio-3.35 00:48:24.656 Starting 3 threads 00:48:31.229 00:48:31.229 filename0: (groupid=0, jobs=1): err= 0: pid=3429468: Mon Oct 7 14:58:53 2024 00:48:31.229 read: IOPS=219, BW=27.4MiB/s (28.7MB/s)(138MiB/5048msec) 00:48:31.229 slat (nsec): min=8286, max=52518, avg=11481.32, stdev=1960.05 00:48:31.229 clat (usec): min=8028, max=93639, avg=13626.47, stdev=4188.87 00:48:31.229 lat (usec): min=8042, max=93651, avg=13637.96, stdev=4188.94 00:48:31.229 clat percentiles (usec): 00:48:31.229 | 1.00th=[ 8225], 5.00th=[10028], 10.00th=[10945], 20.00th=[11863], 00:48:31.229 | 30.00th=[12649], 40.00th=[13304], 50.00th=[13698], 60.00th=[13960], 00:48:31.229 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15270], 95.00th=[15795], 00:48:31.229 | 99.00th=[18482], 99.50th=[51643], 99.90th=[55313], 99.95th=[93848], 00:48:31.229 | 99.99th=[93848] 00:48:31.229 bw ( KiB/s): min=24576, max=30976, per=34.69%, avg=28262.40, stdev=1961.55, samples=10 00:48:31.229 iops : min= 192, max= 242, avg=220.80, stdev=15.32, samples=10 00:48:31.229 lat (msec) : 10=4.88%, 20=94.22%, 50=0.36%, 100=0.54% 00:48:31.229 cpu : usr=95.44%, sys=4.28%, ctx=9, majf=0, minf=1638 00:48:31.229 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:31.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:31.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:31.229 issued rwts: total=1107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:31.229 latency : target=0, window=0, percentile=100.00%, depth=3 00:48:31.229 filename0: (groupid=0, jobs=1): err= 0: pid=3429469: Mon Oct 7 14:58:53 2024 00:48:31.229 read: IOPS=210, BW=26.3MiB/s (27.6MB/s)(133MiB/5045msec) 00:48:31.229 slat (nsec): min=6125, max=51889, avg=11811.81, stdev=2302.29 00:48:31.229 clat (usec): min=7041, max=93679, avg=14196.50, stdev=5546.18 00:48:31.229 lat (usec): min=7053, max=93691, avg=14208.32, stdev=5546.03 00:48:31.229 clat percentiles (usec): 00:48:31.229 | 
1.00th=[ 8586], 5.00th=[10421], 10.00th=[10945], 20.00th=[11731], 00:48:31.229 | 30.00th=[12649], 40.00th=[13304], 50.00th=[13960], 60.00th=[14353], 00:48:31.229 | 70.00th=[14746], 80.00th=[15270], 90.00th=[15926], 95.00th=[16581], 00:48:31.229 | 99.00th=[51643], 99.50th=[52691], 99.90th=[58459], 99.95th=[93848], 00:48:31.229 | 99.99th=[93848] 00:48:31.229 bw ( KiB/s): min=23808, max=30208, per=33.25%, avg=27089.80, stdev=2154.34, samples=10 00:48:31.229 iops : min= 186, max= 236, avg=211.60, stdev=16.86, samples=10 00:48:31.229 lat (msec) : 10=3.39%, 20=95.10%, 50=0.09%, 100=1.41% 00:48:31.229 cpu : usr=94.47%, sys=5.27%, ctx=8, majf=0, minf=1635 00:48:31.229 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:31.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:31.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:31.229 issued rwts: total=1062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:31.230 latency : target=0, window=0, percentile=100.00%, depth=3 00:48:31.230 filename0: (groupid=0, jobs=1): err= 0: pid=3429470: Mon Oct 7 14:58:53 2024 00:48:31.230 read: IOPS=208, BW=26.1MiB/s (27.4MB/s)(131MiB/5002msec) 00:48:31.230 slat (nsec): min=6173, max=64048, avg=11611.80, stdev=2318.57 00:48:31.230 clat (usec): min=7727, max=55599, avg=14358.36, stdev=5808.81 00:48:31.230 lat (usec): min=7739, max=55611, avg=14369.97, stdev=5808.80 00:48:31.230 clat percentiles (usec): 00:48:31.230 | 1.00th=[ 8586], 5.00th=[10421], 10.00th=[11207], 20.00th=[11994], 00:48:31.230 | 30.00th=[12780], 40.00th=[13304], 50.00th=[13829], 60.00th=[14222], 00:48:31.230 | 70.00th=[14615], 80.00th=[15139], 90.00th=[15795], 95.00th=[16581], 00:48:31.230 | 99.00th=[53216], 99.50th=[53740], 99.90th=[53740], 99.95th=[55837], 00:48:31.230 | 99.99th=[55837] 00:48:31.230 bw ( KiB/s): min=19200, max=29440, per=32.64%, avg=26595.56, stdev=3005.81, samples=9 00:48:31.230 iops : min= 150, max= 230, avg=207.78, 
stdev=23.48, samples=9 00:48:31.230 lat (msec) : 10=3.35%, 20=94.64%, 50=0.10%, 100=1.92% 00:48:31.230 cpu : usr=94.78%, sys=4.94%, ctx=9, majf=0, minf=1633 00:48:31.230 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:31.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:31.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:31.230 issued rwts: total=1044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:31.230 latency : target=0, window=0, percentile=100.00%, depth=3 00:48:31.230 00:48:31.230 Run status group 0 (all jobs): 00:48:31.230 READ: bw=79.6MiB/s (83.4MB/s), 26.1MiB/s-27.4MiB/s (27.4MB/s-28.7MB/s), io=402MiB (421MB), run=5002-5048msec 00:48:31.230 ----------------------------------------------------- 00:48:31.230 Suppressions used: 00:48:31.230 count bytes template 00:48:31.230 5 44 /usr/src/fio/parse.c 00:48:31.230 1 8 libtcmalloc_minimal.so 00:48:31.230 1 904 libcrypto.so 00:48:31.230 ----------------------------------------------------- 00:48:31.230 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:31.230 14:58:54 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:31.230 bdev_null0 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 
53313233-0 --allow-any-host 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:31.230 [2024-10-07 14:58:54.705856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:48:31.230 bdev_null1 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:31.230 bdev_null2 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:48:31.230 14:58:54 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:48:31.230 14:58:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:31.230 { 00:48:31.230 "params": { 00:48:31.230 "name": "Nvme$subsystem", 00:48:31.230 "trtype": "$TEST_TRANSPORT", 00:48:31.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:31.230 "adrfam": "ipv4", 00:48:31.230 "trsvcid": "$NVMF_PORT", 00:48:31.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:31.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:31.230 "hdgst": ${hdgst:-false}, 00:48:31.230 "ddgst": ${ddgst:-false} 00:48:31.230 }, 00:48:31.230 "method": "bdev_nvme_attach_controller" 00:48:31.230 } 00:48:31.230 EOF 00:48:31.230 )") 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:31.231 { 00:48:31.231 "params": { 00:48:31.231 "name": "Nvme$subsystem", 00:48:31.231 "trtype": "$TEST_TRANSPORT", 00:48:31.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:31.231 "adrfam": "ipv4", 00:48:31.231 "trsvcid": "$NVMF_PORT", 00:48:31.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:31.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:31.231 "hdgst": ${hdgst:-false}, 00:48:31.231 "ddgst": ${ddgst:-false} 00:48:31.231 }, 00:48:31.231 "method": "bdev_nvme_attach_controller" 00:48:31.231 } 00:48:31.231 EOF 00:48:31.231 )") 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:31.231 { 00:48:31.231 "params": { 00:48:31.231 "name": "Nvme$subsystem", 00:48:31.231 "trtype": "$TEST_TRANSPORT", 00:48:31.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:31.231 "adrfam": "ipv4", 00:48:31.231 "trsvcid": "$NVMF_PORT", 00:48:31.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:31.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:31.231 "hdgst": ${hdgst:-false}, 00:48:31.231 "ddgst": ${ddgst:-false} 00:48:31.231 }, 00:48:31.231 "method": "bdev_nvme_attach_controller" 00:48:31.231 } 00:48:31.231 EOF 00:48:31.231 )") 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:48:31.231 "params": { 00:48:31.231 "name": "Nvme0", 00:48:31.231 "trtype": "tcp", 00:48:31.231 "traddr": "10.0.0.2", 00:48:31.231 "adrfam": "ipv4", 00:48:31.231 "trsvcid": "4420", 00:48:31.231 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:31.231 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:31.231 "hdgst": false, 00:48:31.231 "ddgst": false 00:48:31.231 }, 00:48:31.231 "method": "bdev_nvme_attach_controller" 00:48:31.231 },{ 00:48:31.231 "params": { 00:48:31.231 "name": "Nvme1", 00:48:31.231 "trtype": "tcp", 00:48:31.231 "traddr": "10.0.0.2", 00:48:31.231 "adrfam": "ipv4", 00:48:31.231 "trsvcid": "4420", 00:48:31.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:48:31.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:48:31.231 "hdgst": false, 00:48:31.231 "ddgst": false 00:48:31.231 }, 00:48:31.231 "method": "bdev_nvme_attach_controller" 00:48:31.231 },{ 00:48:31.231 "params": { 00:48:31.231 "name": "Nvme2", 00:48:31.231 "trtype": "tcp", 00:48:31.231 "traddr": "10.0.0.2", 00:48:31.231 "adrfam": "ipv4", 00:48:31.231 "trsvcid": "4420", 00:48:31.231 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:48:31.231 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:48:31.231 "hdgst": false, 00:48:31.231 "ddgst": false 00:48:31.231 }, 00:48:31.231 "method": "bdev_nvme_attach_controller" 00:48:31.231 }' 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:48:31.231 14:58:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:31.813 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:48:31.813 ... 00:48:31.813 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:48:31.813 ... 00:48:31.813 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:48:31.813 ... 00:48:31.813 fio-3.35 00:48:31.813 Starting 24 threads 00:48:44.037 00:48:44.037 filename0: (groupid=0, jobs=1): err= 0: pid=3431143: Mon Oct 7 14:59:06 2024 00:48:44.037 read: IOPS=433, BW=1735KiB/s (1777kB/s)(17.0MiB/10034msec) 00:48:44.037 slat (nsec): min=6333, max=97308, avg=19565.41, stdev=12432.61 00:48:44.037 clat (usec): min=5131, max=48394, avg=36728.83, stdev=2364.32 00:48:44.037 lat (usec): min=5145, max=48405, avg=36748.39, stdev=2364.44 00:48:44.037 clat percentiles (usec): 00:48:44.037 | 1.00th=[27657], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:48:44.037 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36963], 60.00th=[36963], 00:48:44.037 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060], 00:48:44.037 | 99.00th=[40633], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:48:44.037 | 99.99th=[48497] 00:48:44.037 bw ( KiB/s): min= 1660, max= 1916, per=4.12%, avg=1733.60, stdev=77.15, samples=20 00:48:44.037 iops : min= 415, max= 479, avg=433.40, stdev=19.29, samples=20 00:48:44.037 lat (msec) : 10=0.02%, 20=0.34%, 50=99.63% 00:48:44.037 cpu : usr=98.70%, sys=0.94%, ctx=21, majf=0, minf=1633 00:48:44.037 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:48:44.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:48:44.037 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.037 issued rwts: total=4352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.037 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:44.037 filename0: (groupid=0, jobs=1): err= 0: pid=3431144: Mon Oct 7 14:59:06 2024 00:48:44.037 read: IOPS=432, BW=1729KiB/s (1770kB/s)(16.9MiB/10024msec) 00:48:44.037 slat (nsec): min=6073, max=91344, avg=24607.25, stdev=13585.72 00:48:44.037 clat (usec): min=23534, max=57092, avg=36771.53, stdev=2603.86 00:48:44.037 lat (usec): min=23543, max=57133, avg=36796.14, stdev=2604.49 00:48:44.037 clat percentiles (usec): 00:48:44.037 | 1.00th=[24511], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:48:44.037 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36439], 60.00th=[36963], 00:48:44.037 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38536], 95.00th=[39060], 00:48:44.037 | 99.00th=[40633], 99.50th=[56361], 99.90th=[56886], 99.95th=[56886], 00:48:44.037 | 99.99th=[56886] 00:48:44.037 bw ( KiB/s): min= 1637, max= 1904, per=4.10%, avg=1725.05, stdev=76.44, samples=20 00:48:44.037 iops : min= 409, max= 476, avg=431.25, stdev=19.13, samples=20 00:48:44.037 lat (msec) : 50=99.40%, 100=0.60% 00:48:44.037 cpu : usr=98.78%, sys=0.87%, ctx=14, majf=0, minf=1630 00:48:44.037 IO depths : 1=6.0%, 2=12.1%, 4=24.4%, 8=51.0%, 16=6.5%, 32=0.0%, >=64=0.0% 00:48:44.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.037 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.037 issued rwts: total=4332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.037 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:44.037 filename0: (groupid=0, jobs=1): err= 0: pid=3431146: Mon Oct 7 14:59:06 2024 00:48:44.037 read: IOPS=435, BW=1740KiB/s (1782kB/s)(17.0MiB/10031msec) 00:48:44.037 slat (nsec): min=6358, max=82664, avg=17597.76, stdev=10346.05 00:48:44.037 clat (usec): min=9853, max=51046, 
avg=36612.25, stdev=2856.47 00:48:44.037 lat (usec): min=9863, max=51075, avg=36629.85, stdev=2857.57 00:48:44.037 clat percentiles (usec): 00:48:44.037 | 1.00th=[22152], 5.00th=[35390], 10.00th=[35914], 20.00th=[36439], 00:48:44.037 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36963], 60.00th=[36963], 00:48:44.037 | 70.00th=[37487], 80.00th=[37487], 90.00th=[38536], 95.00th=[39060], 00:48:44.037 | 99.00th=[40633], 99.50th=[41681], 99.90th=[43254], 99.95th=[50070], 00:48:44.037 | 99.99th=[51119] 00:48:44.037 bw ( KiB/s): min= 1641, max= 2016, per=4.13%, avg=1737.45, stdev=91.31, samples=20 00:48:44.037 iops : min= 410, max= 504, avg=434.35, stdev=22.84, samples=20 00:48:44.037 lat (msec) : 10=0.14%, 20=0.66%, 50=99.11%, 100=0.09% 00:48:44.037 cpu : usr=98.80%, sys=0.83%, ctx=18, majf=0, minf=1636 00:48:44.037 IO depths : 1=6.0%, 2=12.1%, 4=24.4%, 8=51.0%, 16=6.5%, 32=0.0%, >=64=0.0% 00:48:44.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.037 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.037 issued rwts: total=4364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.037 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:44.037 filename0: (groupid=0, jobs=1): err= 0: pid=3431147: Mon Oct 7 14:59:06 2024 00:48:44.037 read: IOPS=451, BW=1804KiB/s (1848kB/s)(17.6MiB/10015msec) 00:48:44.037 slat (nsec): min=5822, max=90043, avg=17334.88, stdev=12336.49 00:48:44.037 clat (usec): min=14876, max=81140, avg=35344.52, stdev=7099.79 00:48:44.037 lat (usec): min=14888, max=81162, avg=35361.85, stdev=7100.68 00:48:44.037 clat percentiles (usec): 00:48:44.037 | 1.00th=[21103], 5.00th=[23987], 10.00th=[25035], 20.00th=[29492], 00:48:44.037 | 30.00th=[34866], 40.00th=[35914], 50.00th=[36439], 60.00th=[36963], 00:48:44.037 | 70.00th=[36963], 80.00th=[38011], 90.00th=[40633], 95.00th=[50070], 00:48:44.037 | 99.00th=[57934], 99.50th=[58459], 99.90th=[66323], 99.95th=[66323], 00:48:44.037 | 99.99th=[81265] 
00:48:44.037 bw ( KiB/s): min= 1600, max= 1984, per=4.26%, avg=1794.32, stdev=97.63, samples=19 00:48:44.037 iops : min= 400, max= 496, avg=448.58, stdev=24.41, samples=19 00:48:44.037 lat (msec) : 20=0.46%, 50=94.62%, 100=4.91% 00:48:44.037 cpu : usr=98.87%, sys=0.78%, ctx=15, majf=0, minf=1632 00:48:44.038 IO depths : 1=0.8%, 2=3.0%, 4=11.5%, 8=71.5%, 16=13.2%, 32=0.0%, >=64=0.0% 00:48:44.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.038 complete : 0=0.0%, 4=90.9%, 8=5.0%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.038 issued rwts: total=4518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.038 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:44.038 filename0: (groupid=0, jobs=1): err= 0: pid=3431148: Mon Oct 7 14:59:06 2024 00:48:44.038 read: IOPS=475, BW=1903KiB/s (1949kB/s)(18.6MiB/10021msec) 00:48:44.038 slat (usec): min=6, max=100, avg=14.15, stdev= 9.46 00:48:44.038 clat (usec): min=12645, max=50889, avg=33508.76, stdev=5411.29 00:48:44.038 lat (usec): min=12678, max=50921, avg=33522.90, stdev=5414.58 00:48:44.038 clat percentiles (usec): 00:48:44.038 | 1.00th=[21103], 5.00th=[23200], 10.00th=[24773], 20.00th=[26608], 00:48:44.038 | 30.00th=[35390], 40.00th=[35914], 50.00th=[36439], 60.00th=[36439], 00:48:44.038 | 70.00th=[36963], 80.00th=[36963], 90.00th=[37487], 95.00th=[38011], 00:48:44.038 | 99.00th=[39060], 99.50th=[39584], 99.90th=[40109], 99.95th=[40109], 00:48:44.038 | 99.99th=[51119] 00:48:44.038 bw ( KiB/s): min= 1660, max= 2560, per=4.51%, avg=1900.20, stdev=270.37, samples=20 00:48:44.038 iops : min= 415, max= 640, avg=475.05, stdev=67.59, samples=20 00:48:44.038 lat (msec) : 20=0.67%, 50=99.29%, 100=0.04% 00:48:44.038 cpu : usr=98.38%, sys=1.09%, ctx=71, majf=0, minf=1637 00:48:44.038 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:48:44.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.038 complete : 0=0.0%, 4=94.1%, 8=0.0%, 
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.038 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.038 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:44.038 filename0: (groupid=0, jobs=1): err= 0: pid=3431149: Mon Oct 7 14:59:06 2024 00:48:44.038 read: IOPS=429, BW=1719KiB/s (1760kB/s)(16.8MiB/10016msec) 00:48:44.038 slat (nsec): min=6500, max=59331, avg=18007.92, stdev=8599.95 00:48:44.038 clat (usec): min=22263, max=83629, avg=37071.08, stdev=2324.76 00:48:44.038 lat (usec): min=22276, max=83657, avg=37089.09, stdev=2324.71 00:48:44.038 clat percentiles (usec): 00:48:44.038 | 1.00th=[34866], 5.00th=[35914], 10.00th=[35914], 20.00th=[36439], 00:48:44.038 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36963], 60.00th=[36963], 00:48:44.038 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060], 00:48:44.038 | 99.00th=[40633], 99.50th=[42206], 99.90th=[62653], 99.95th=[62653], 00:48:44.038 | 99.99th=[83362] 00:48:44.038 bw ( KiB/s): min= 1536, max= 1792, per=4.07%, avg=1714.65, stdev=76.63, samples=20 00:48:44.038 iops : min= 384, max= 448, avg=428.65, stdev=19.17, samples=20 00:48:44.038 lat (msec) : 50=99.58%, 100=0.42% 00:48:44.038 cpu : usr=98.74%, sys=0.92%, ctx=20, majf=0, minf=1635 00:48:44.038 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:48:44.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.038 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.038 issued rwts: total=4304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.038 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:44.038 filename0: (groupid=0, jobs=1): err= 0: pid=3431151: Mon Oct 7 14:59:06 2024 00:48:44.038 read: IOPS=429, BW=1720KiB/s (1761kB/s)(16.8MiB/10026msec) 00:48:44.038 slat (nsec): min=6639, max=79551, avg=22456.98, stdev=11146.67 00:48:44.038 clat (usec): min=19899, max=62225, avg=37024.85, stdev=2818.14 00:48:44.038 lat (usec): 
min=19911, max=62253, avg=37047.31, stdev=2818.60 00:48:44.038 clat percentiles (usec): 00:48:44.038 | 1.00th=[24511], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:48:44.038 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36963], 60.00th=[36963], 00:48:44.038 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060], 00:48:44.038 | 99.00th=[48497], 99.50th=[50594], 99.90th=[62129], 99.95th=[62129], 00:48:44.038 | 99.99th=[62129] 00:48:44.038 bw ( KiB/s): min= 1615, max= 1792, per=4.07%, avg=1715.10, stdev=66.03, samples=20 00:48:44.038 iops : min= 403, max= 448, avg=428.70, stdev=16.57, samples=20 00:48:44.038 lat (msec) : 20=0.16%, 50=99.00%, 100=0.84% 00:48:44.038 cpu : usr=98.84%, sys=0.80%, ctx=15, majf=0, minf=1632 00:48:44.038 IO depths : 1=5.9%, 2=11.9%, 4=24.3%, 8=51.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:48:44.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.038 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.038 issued rwts: total=4310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.038 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:44.038 filename0: (groupid=0, jobs=1): err= 0: pid=3431152: Mon Oct 7 14:59:06 2024 00:48:44.038 read: IOPS=431, BW=1728KiB/s (1769kB/s)(16.9MiB/10015msec) 00:48:44.038 slat (nsec): min=6187, max=81185, avg=16396.03, stdev=10054.15 00:48:44.038 clat (usec): min=20685, max=53023, avg=36901.52, stdev=1936.76 00:48:44.038 lat (usec): min=20698, max=53031, avg=36917.92, stdev=1936.14 00:48:44.038 clat percentiles (usec): 00:48:44.038 | 1.00th=[26084], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:48:44.038 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36963], 60.00th=[36963], 00:48:44.038 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060], 00:48:44.038 | 99.00th=[40109], 99.50th=[40109], 99.90th=[53216], 99.95th=[53216], 00:48:44.038 | 99.99th=[53216] 00:48:44.038 bw ( KiB/s): min= 1660, max= 1840, per=4.09%, 
avg=1723.40, stdev=68.63, samples=20 00:48:44.038 iops : min= 415, max= 460, avg=430.85, stdev=17.16, samples=20 00:48:44.038 lat (msec) : 50=99.86%, 100=0.14% 00:48:44.038 cpu : usr=98.93%, sys=0.70%, ctx=19, majf=0, minf=1635 00:48:44.038 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:48:44.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.038 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.038 issued rwts: total=4326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.038 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:44.038 filename1: (groupid=0, jobs=1): err= 0: pid=3431153: Mon Oct 7 14:59:06 2024 00:48:44.038 read: IOPS=454, BW=1817KiB/s (1860kB/s)(17.8MiB/10005msec) 00:48:44.038 slat (nsec): min=6055, max=92027, avg=16677.43, stdev=12293.62 00:48:44.038 clat (usec): min=13783, max=73830, avg=35111.20, stdev=6080.26 00:48:44.038 lat (usec): min=13820, max=73857, avg=35127.88, stdev=6081.44 00:48:44.038 clat percentiles (usec): 00:48:44.038 | 1.00th=[22938], 5.00th=[24511], 10.00th=[26084], 20.00th=[30278], 00:48:44.038 | 30.00th=[32637], 40.00th=[35914], 50.00th=[36439], 60.00th=[36439], 00:48:44.038 | 70.00th=[36963], 80.00th=[38011], 90.00th=[41157], 95.00th=[44303], 00:48:44.038 | 99.00th=[54264], 99.50th=[56886], 99.90th=[63701], 99.95th=[63701], 00:48:44.038 | 99.99th=[73925] 00:48:44.038 bw ( KiB/s): min= 1603, max= 1984, per=4.31%, avg=1813.84, stdev=112.90, samples=19 00:48:44.038 iops : min= 400, max= 496, avg=453.42, stdev=28.30, samples=19 00:48:44.038 lat (msec) : 20=0.44%, 50=97.80%, 100=1.76% 00:48:44.038 cpu : usr=98.81%, sys=0.83%, ctx=13, majf=0, minf=1634 00:48:44.038 IO depths : 1=1.7%, 2=3.7%, 4=10.3%, 8=71.6%, 16=12.6%, 32=0.0%, >=64=0.0% 00:48:44.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.038 complete : 0=0.0%, 4=90.4%, 8=5.8%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.038 issued 
rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.038 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:44.038 filename1: (groupid=0, jobs=1): err= 0: pid=3431155: Mon Oct 7 14:59:06 2024 00:48:44.038 read: IOPS=430, BW=1720KiB/s (1762kB/s)(16.8MiB/10007msec) 00:48:44.038 slat (nsec): min=6058, max=92869, avg=24241.77, stdev=13938.48 00:48:44.038 clat (usec): min=11498, max=73263, avg=36957.33, stdev=2935.82 00:48:44.038 lat (usec): min=11504, max=73293, avg=36981.57, stdev=2935.43 00:48:44.038 clat percentiles (usec): 00:48:44.038 | 1.00th=[34341], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:48:44.038 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36963], 60.00th=[36963], 00:48:44.038 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060], 00:48:44.038 | 99.00th=[40109], 99.50th=[41157], 99.90th=[72877], 99.95th=[72877], 00:48:44.038 | 99.99th=[72877] 00:48:44.038 bw ( KiB/s): min= 1536, max= 1792, per=4.06%, avg=1710.95, stdev=87.35, samples=19 00:48:44.038 iops : min= 384, max= 448, avg=427.74, stdev=21.84, samples=19 00:48:44.038 lat (msec) : 20=0.37%, 50=99.26%, 100=0.37% 00:48:44.038 cpu : usr=98.71%, sys=0.93%, ctx=13, majf=0, minf=1634 00:48:44.038 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:48:44.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.038 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.038 issued rwts: total=4304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.038 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:44.038 filename1: (groupid=0, jobs=1): err= 0: pid=3431156: Mon Oct 7 14:59:06 2024 00:48:44.038 read: IOPS=445, BW=1781KiB/s (1823kB/s)(17.4MiB/10006msec) 00:48:44.038 slat (nsec): min=6092, max=89067, avg=18063.20, stdev=11734.52 00:48:44.038 clat (usec): min=15753, max=63522, avg=35797.85, stdev=5127.75 00:48:44.038 lat (usec): min=15764, max=63550, avg=35815.91, 
stdev=5129.07 00:48:44.038 clat percentiles (usec): 00:48:44.038 | 1.00th=[22414], 5.00th=[25297], 10.00th=[28181], 20.00th=[33817], 00:48:44.038 | 30.00th=[35914], 40.00th=[36439], 50.00th=[36439], 60.00th=[36963], 00:48:44.038 | 70.00th=[37487], 80.00th=[38011], 90.00th=[39060], 95.00th=[41681], 00:48:44.038 | 99.00th=[52167], 99.50th=[55313], 99.90th=[63701], 99.95th=[63701], 00:48:44.038 | 99.99th=[63701] 00:48:44.038 bw ( KiB/s): min= 1539, max= 1952, per=4.23%, avg=1781.00, stdev=111.50, samples=19 00:48:44.038 iops : min= 384, max= 488, avg=445.21, stdev=27.97, samples=19 00:48:44.038 lat (msec) : 20=0.54%, 50=97.89%, 100=1.57% 00:48:44.038 cpu : usr=98.77%, sys=0.87%, ctx=14, majf=0, minf=1631 00:48:44.038 IO depths : 1=3.5%, 2=7.3%, 4=16.4%, 8=62.8%, 16=10.0%, 32=0.0%, >=64=0.0% 00:48:44.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.038 complete : 0=0.0%, 4=91.9%, 8=3.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.038 issued rwts: total=4454,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.038 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:44.038 filename1: (groupid=0, jobs=1): err= 0: pid=3431157: Mon Oct 7 14:59:06 2024 00:48:44.038 read: IOPS=437, BW=1749KiB/s (1791kB/s)(17.1MiB/10004msec) 00:48:44.038 slat (nsec): min=6210, max=71114, avg=16134.12, stdev=10073.74 00:48:44.038 clat (usec): min=10224, max=49989, avg=36460.13, stdev=3484.29 00:48:44.038 lat (usec): min=10234, max=50011, avg=36476.26, stdev=3485.42 00:48:44.038 clat percentiles (usec): 00:48:44.038 | 1.00th=[17171], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:48:44.038 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36963], 60.00th=[36963], 00:48:44.038 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060], 00:48:44.038 | 99.00th=[40633], 99.50th=[40633], 99.90th=[41157], 99.95th=[46400], 00:48:44.038 | 99.99th=[50070] 00:48:44.038 bw ( KiB/s): min= 1660, max= 2096, per=4.16%, avg=1753.47, stdev=104.45, samples=19 
00:48:44.038 iops : min= 415, max= 524, avg=438.37, stdev=26.11, samples=19 00:48:44.038 lat (msec) : 20=1.46%, 50=98.54% 00:48:44.038 cpu : usr=98.90%, sys=0.75%, ctx=15, majf=0, minf=1636 00:48:44.038 IO depths : 1=6.0%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:48:44.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.039 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.039 issued rwts: total=4374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:44.039 filename1: (groupid=0, jobs=1): err= 0: pid=3431158: Mon Oct 7 14:59:06 2024 00:48:44.039 read: IOPS=431, BW=1725KiB/s (1766kB/s)(16.9MiB/10006msec) 00:48:44.039 slat (nsec): min=5662, max=86449, avg=20348.27, stdev=13065.06 00:48:44.039 clat (usec): min=17788, max=75188, avg=36897.04, stdev=3948.87 00:48:44.039 lat (usec): min=17795, max=75210, avg=36917.39, stdev=3949.10 00:48:44.039 clat percentiles (usec): 00:48:44.039 | 1.00th=[22938], 5.00th=[34341], 10.00th=[35914], 20.00th=[35914], 00:48:44.039 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36963], 60.00th=[36963], 00:48:44.039 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38536], 95.00th=[39584], 00:48:44.039 | 99.00th=[54789], 99.50th=[58459], 99.90th=[74974], 99.95th=[74974], 00:48:44.039 | 99.99th=[74974] 00:48:44.039 bw ( KiB/s): min= 1536, max= 1840, per=4.09%, avg=1723.58, stdev=78.89, samples=19 00:48:44.039 iops : min= 384, max= 460, avg=430.89, stdev=19.72, samples=19 00:48:44.039 lat (msec) : 20=0.32%, 50=98.42%, 100=1.25% 00:48:44.039 cpu : usr=98.90%, sys=0.74%, ctx=14, majf=0, minf=1635 00:48:44.039 IO depths : 1=5.5%, 2=11.4%, 4=23.7%, 8=52.4%, 16=7.0%, 32=0.0%, >=64=0.0% 00:48:44.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.039 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.039 issued rwts: total=4314,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:48:44.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:44.039 filename1: (groupid=0, jobs=1): err= 0: pid=3431160: Mon Oct 7 14:59:06 2024 00:48:44.039 read: IOPS=443, BW=1776KiB/s (1818kB/s)(17.4MiB/10007msec) 00:48:44.039 slat (nsec): min=6109, max=84737, avg=17529.10, stdev=13183.06 00:48:44.039 clat (usec): min=8114, max=86573, avg=35940.22, stdev=6564.28 00:48:44.039 lat (usec): min=8121, max=86605, avg=35957.75, stdev=6564.84 00:48:44.039 clat percentiles (usec): 00:48:44.039 | 1.00th=[20579], 5.00th=[25560], 10.00th=[28181], 20.00th=[31065], 00:48:44.039 | 30.00th=[35914], 40.00th=[36439], 50.00th=[36439], 60.00th=[36963], 00:48:44.039 | 70.00th=[37487], 80.00th=[38536], 90.00th=[41681], 95.00th=[44827], 00:48:44.039 | 99.00th=[56361], 99.50th=[58459], 99.90th=[86508], 99.95th=[86508], 00:48:44.039 | 99.99th=[86508] 00:48:44.039 bw ( KiB/s): min= 1539, max= 2064, per=4.19%, avg=1763.32, stdev=117.64, samples=19 00:48:44.039 iops : min= 384, max= 516, avg=440.79, stdev=29.49, samples=19 00:48:44.039 lat (msec) : 10=0.14%, 20=0.54%, 50=96.71%, 100=2.61% 00:48:44.039 cpu : usr=98.73%, sys=0.93%, ctx=13, majf=0, minf=1632 00:48:44.039 IO depths : 1=1.2%, 2=2.7%, 4=8.2%, 8=74.2%, 16=13.7%, 32=0.0%, >=64=0.0% 00:48:44.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.039 complete : 0=0.0%, 4=90.1%, 8=6.7%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.039 issued rwts: total=4442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:44.039 filename1: (groupid=0, jobs=1): err= 0: pid=3431161: Mon Oct 7 14:59:06 2024 00:48:44.039 read: IOPS=429, BW=1719KiB/s (1760kB/s)(16.8MiB/10017msec) 00:48:44.039 slat (nsec): min=6398, max=77466, avg=20451.54, stdev=11574.71 00:48:44.039 clat (usec): min=19851, max=53372, avg=37056.08, stdev=2093.03 00:48:44.039 lat (usec): min=19862, max=53396, avg=37076.53, stdev=2092.40 00:48:44.039 clat 
percentiles (usec): 00:48:44.039 | 1.00th=[33817], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:48:44.039 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36963], 60.00th=[36963], 00:48:44.039 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060], 00:48:44.039 | 99.00th=[45876], 99.50th=[50594], 99.90th=[53216], 99.95th=[53216], 00:48:44.039 | 99.99th=[53216] 00:48:44.039 bw ( KiB/s): min= 1664, max= 1792, per=4.08%, avg=1717.68, stdev=64.68, samples=19 00:48:44.039 iops : min= 416, max= 448, avg=429.42, stdev=16.17, samples=19 00:48:44.039 lat (msec) : 20=0.21%, 50=99.28%, 100=0.51% 00:48:44.039 cpu : usr=98.77%, sys=0.88%, ctx=17, majf=0, minf=1636 00:48:44.039 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:48:44.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.039 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.039 issued rwts: total=4304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:44.039 filename1: (groupid=0, jobs=1): err= 0: pid=3431162: Mon Oct 7 14:59:06 2024 00:48:44.039 read: IOPS=464, BW=1858KiB/s (1903kB/s)(18.2MiB/10014msec) 00:48:44.039 slat (nsec): min=6048, max=76909, avg=14917.17, stdev=10681.23 00:48:44.039 clat (usec): min=14807, max=59071, avg=34360.26, stdev=6227.34 00:48:44.039 lat (usec): min=14855, max=59085, avg=34375.17, stdev=6228.93 00:48:44.039 clat percentiles (usec): 00:48:44.039 | 1.00th=[18744], 5.00th=[23725], 10.00th=[24773], 20.00th=[27919], 00:48:44.039 | 30.00th=[32113], 40.00th=[35914], 50.00th=[36439], 60.00th=[36439], 00:48:44.039 | 70.00th=[36963], 80.00th=[37487], 90.00th=[39060], 95.00th=[42730], 00:48:44.039 | 99.00th=[51643], 99.50th=[57410], 99.90th=[58983], 99.95th=[58983], 00:48:44.039 | 99.99th=[58983] 00:48:44.039 bw ( KiB/s): min= 1664, max= 2064, per=4.39%, avg=1849.05, stdev=99.53, samples=19 00:48:44.039 iops : min= 416, 
max= 516, avg=462.26, stdev=24.88, samples=19 00:48:44.039 lat (msec) : 20=1.46%, 50=97.10%, 100=1.44% 00:48:44.039 cpu : usr=98.82%, sys=0.84%, ctx=14, majf=0, minf=1635 00:48:44.039 IO depths : 1=0.6%, 2=1.4%, 4=5.2%, 8=77.7%, 16=15.1%, 32=0.0%, >=64=0.0% 00:48:44.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.039 complete : 0=0.0%, 4=89.6%, 8=7.8%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.039 issued rwts: total=4652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:44.039 filename2: (groupid=0, jobs=1): err= 0: pid=3431163: Mon Oct 7 14:59:06 2024 00:48:44.039 read: IOPS=432, BW=1730KiB/s (1771kB/s)(16.9MiB/10027msec) 00:48:44.039 slat (nsec): min=6040, max=86999, avg=18727.34, stdev=10134.32 00:48:44.039 clat (usec): min=24693, max=47262, avg=36838.90, stdev=1645.84 00:48:44.039 lat (usec): min=24703, max=47290, avg=36857.63, stdev=1647.05 00:48:44.039 clat percentiles (usec): 00:48:44.039 | 1.00th=[29492], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:48:44.039 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36963], 60.00th=[36963], 00:48:44.039 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060], 00:48:44.039 | 99.00th=[40109], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:48:44.039 | 99.99th=[47449] 00:48:44.039 bw ( KiB/s): min= 1650, max= 1792, per=4.10%, avg=1726.50, stdev=66.46, samples=20 00:48:44.039 iops : min= 412, max= 448, avg=431.60, stdev=16.65, samples=20 00:48:44.039 lat (msec) : 50=100.00% 00:48:44.039 cpu : usr=98.90%, sys=0.76%, ctx=14, majf=0, minf=1635 00:48:44.039 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:48:44.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.039 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.039 issued rwts: total=4336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.039 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:48:44.039 filename2: (groupid=0, jobs=1): err= 0: pid=3431165: Mon Oct 7 14:59:06 2024 00:48:44.039 read: IOPS=433, BW=1733KiB/s (1774kB/s)(16.9MiB/10010msec) 00:48:44.039 slat (nsec): min=6265, max=89976, avg=20413.35, stdev=11272.23 00:48:44.039 clat (usec): min=10836, max=46238, avg=36756.01, stdev=2424.12 00:48:44.039 lat (usec): min=10845, max=46247, avg=36776.42, stdev=2424.52 00:48:44.039 clat percentiles (usec): 00:48:44.039 | 1.00th=[25560], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:48:44.039 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36963], 60.00th=[36963], 00:48:44.039 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060], 00:48:44.039 | 99.00th=[40109], 99.50th=[40109], 99.90th=[46400], 99.95th=[46400], 00:48:44.039 | 99.99th=[46400] 00:48:44.039 bw ( KiB/s): min= 1660, max= 1920, per=4.11%, avg=1730.74, stdev=78.17, samples=19 00:48:44.039 iops : min= 415, max= 480, avg=432.68, stdev=19.54, samples=19 00:48:44.039 lat (msec) : 20=0.65%, 50=99.35% 00:48:44.039 cpu : usr=98.68%, sys=0.97%, ctx=14, majf=0, minf=1634 00:48:44.039 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:48:44.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.039 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.039 issued rwts: total=4336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:44.039 filename2: (groupid=0, jobs=1): err= 0: pid=3431166: Mon Oct 7 14:59:06 2024 00:48:44.039 read: IOPS=431, BW=1724KiB/s (1766kB/s)(16.9MiB/10022msec) 00:48:44.039 slat (nsec): min=5982, max=87054, avg=18255.03, stdev=13644.10 00:48:44.039 clat (usec): min=27707, max=49495, avg=36962.76, stdev=1322.56 00:48:44.039 lat (usec): min=27716, max=49502, avg=36981.02, stdev=1319.98 00:48:44.039 clat percentiles (usec): 00:48:44.039 | 1.00th=[34866], 
5.00th=[35914], 10.00th=[35914], 20.00th=[35914], 00:48:44.039 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36963], 60.00th=[36963], 00:48:44.039 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060], 00:48:44.039 | 99.00th=[39584], 99.50th=[40109], 99.90th=[41157], 99.95th=[41157], 00:48:44.039 | 99.99th=[49546] 00:48:44.039 bw ( KiB/s): min= 1660, max= 1792, per=4.09%, avg=1721.00, stdev=65.49, samples=20 00:48:44.039 iops : min= 415, max= 448, avg=430.25, stdev=16.37, samples=20 00:48:44.039 lat (msec) : 50=100.00% 00:48:44.039 cpu : usr=98.71%, sys=0.95%, ctx=16, majf=0, minf=1635 00:48:44.039 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:48:44.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.039 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.039 issued rwts: total=4320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:44.039 filename2: (groupid=0, jobs=1): err= 0: pid=3431167: Mon Oct 7 14:59:06 2024 00:48:44.039 read: IOPS=432, BW=1731KiB/s (1773kB/s)(17.0MiB/10032msec) 00:48:44.039 slat (nsec): min=5996, max=65529, avg=17261.34, stdev=8418.82 00:48:44.039 clat (usec): min=20089, max=53536, avg=36819.51, stdev=2260.57 00:48:44.039 lat (usec): min=20098, max=53559, avg=36836.77, stdev=2260.83 00:48:44.039 clat percentiles (usec): 00:48:44.039 | 1.00th=[25822], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:48:44.039 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36963], 60.00th=[36963], 00:48:44.039 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060], 00:48:44.040 | 99.00th=[40633], 99.50th=[46400], 99.90th=[53216], 99.95th=[53740], 00:48:44.040 | 99.99th=[53740] 00:48:44.040 bw ( KiB/s): min= 1648, max= 1856, per=4.11%, avg=1730.20, stdev=71.43, samples=20 00:48:44.040 iops : min= 412, max= 464, avg=432.55, stdev=17.86, samples=20 00:48:44.040 lat (msec) : 
50=99.82%, 100=0.18% 00:48:44.040 cpu : usr=98.65%, sys=0.92%, ctx=46, majf=0, minf=1633 00:48:44.040 IO depths : 1=5.9%, 2=11.9%, 4=24.2%, 8=51.3%, 16=6.7%, 32=0.0%, >=64=0.0% 00:48:44.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.040 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.040 issued rwts: total=4342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:44.040 filename2: (groupid=0, jobs=1): err= 0: pid=3431168: Mon Oct 7 14:59:06 2024 00:48:44.040 read: IOPS=441, BW=1767KiB/s (1809kB/s)(17.3MiB/10006msec) 00:48:44.040 slat (nsec): min=6021, max=59824, avg=17595.71, stdev=8954.79 00:48:44.040 clat (usec): min=18700, max=59582, avg=36071.94, stdev=4646.83 00:48:44.040 lat (usec): min=18707, max=59593, avg=36089.54, stdev=4648.24 00:48:44.040 clat percentiles (usec): 00:48:44.040 | 1.00th=[23200], 5.00th=[25560], 10.00th=[31065], 20.00th=[35914], 00:48:44.040 | 30.00th=[35914], 40.00th=[36439], 50.00th=[36439], 60.00th=[36963], 00:48:44.040 | 70.00th=[36963], 80.00th=[37487], 90.00th=[38536], 95.00th=[39060], 00:48:44.040 | 99.00th=[57410], 99.50th=[58459], 99.90th=[59507], 99.95th=[59507], 00:48:44.040 | 99.99th=[59507] 00:48:44.040 bw ( KiB/s): min= 1632, max= 2064, per=4.18%, avg=1761.47, stdev=110.78, samples=19 00:48:44.040 iops : min= 408, max= 516, avg=440.37, stdev=27.70, samples=19 00:48:44.040 lat (msec) : 20=0.14%, 50=97.92%, 100=1.95% 00:48:44.040 cpu : usr=98.72%, sys=0.96%, ctx=29, majf=0, minf=1632 00:48:44.040 IO depths : 1=5.0%, 2=10.0%, 4=21.4%, 8=56.0%, 16=7.7%, 32=0.0%, >=64=0.0% 00:48:44.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.040 complete : 0=0.0%, 4=93.1%, 8=1.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.040 issued rwts: total=4420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:44.040 
filename2: (groupid=0, jobs=1): err= 0: pid=3431170: Mon Oct 7 14:59:06 2024 00:48:44.040 read: IOPS=429, BW=1720KiB/s (1761kB/s)(16.8MiB/10011msec) 00:48:44.040 slat (nsec): min=6360, max=95682, avg=23816.94, stdev=13824.55 00:48:44.040 clat (usec): min=22899, max=57288, avg=37007.40, stdev=2048.66 00:48:44.040 lat (usec): min=22926, max=57302, avg=37031.21, stdev=2047.66 00:48:44.040 clat percentiles (usec): 00:48:44.040 | 1.00th=[30540], 5.00th=[35390], 10.00th=[35914], 20.00th=[35914], 00:48:44.040 | 30.00th=[36439], 40.00th=[36439], 50.00th=[36963], 60.00th=[36963], 00:48:44.040 | 70.00th=[37487], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060], 00:48:44.040 | 99.00th=[47973], 99.50th=[51119], 99.90th=[52167], 99.95th=[52167], 00:48:44.040 | 99.99th=[57410] 00:48:44.040 bw ( KiB/s): min= 1660, max= 1792, per=4.07%, avg=1715.15, stdev=62.91, samples=20 00:48:44.040 iops : min= 415, max= 448, avg=428.75, stdev=15.76, samples=20 00:48:44.040 lat (msec) : 50=99.35%, 100=0.65% 00:48:44.040 cpu : usr=98.59%, sys=1.07%, ctx=18, majf=0, minf=1633 00:48:44.040 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:48:44.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.040 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.040 issued rwts: total=4304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:44.040 filename2: (groupid=0, jobs=1): err= 0: pid=3431171: Mon Oct 7 14:59:06 2024 00:48:44.040 read: IOPS=442, BW=1772KiB/s (1814kB/s)(17.3MiB/10006msec) 00:48:44.040 slat (nsec): min=5579, max=80144, avg=20489.76, stdev=12319.26 00:48:44.040 clat (usec): min=22093, max=80454, avg=35932.41, stdev=4523.71 00:48:44.040 lat (usec): min=22103, max=80476, avg=35952.90, stdev=4525.75 00:48:44.040 clat percentiles (usec): 00:48:44.040 | 1.00th=[24249], 5.00th=[25822], 10.00th=[29492], 20.00th=[35914], 00:48:44.040 | 
30.00th=[35914], 40.00th=[36439], 50.00th=[36439], 60.00th=[36963], 00:48:44.040 | 70.00th=[36963], 80.00th=[37487], 90.00th=[38536], 95.00th=[39060], 00:48:44.040 | 99.00th=[52691], 99.50th=[57934], 99.90th=[60031], 99.95th=[60031], 00:48:44.040 | 99.99th=[80217] 00:48:44.040 bw ( KiB/s): min= 1584, max= 2080, per=4.19%, avg=1764.84, stdev=129.62, samples=19 00:48:44.040 iops : min= 396, max= 520, avg=441.21, stdev=32.40, samples=19 00:48:44.040 lat (msec) : 50=98.47%, 100=1.53% 00:48:44.040 cpu : usr=98.59%, sys=0.98%, ctx=39, majf=0, minf=1633 00:48:44.040 IO depths : 1=4.9%, 2=10.0%, 4=21.2%, 8=56.1%, 16=7.7%, 32=0.0%, >=64=0.0% 00:48:44.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.040 complete : 0=0.0%, 4=93.0%, 8=1.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.040 issued rwts: total=4432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:44.040 filename2: (groupid=0, jobs=1): err= 0: pid=3431172: Mon Oct 7 14:59:06 2024 00:48:44.040 read: IOPS=444, BW=1778KiB/s (1821kB/s)(17.4MiB/10009msec) 00:48:44.040 slat (nsec): min=4166, max=91095, avg=17251.69, stdev=11208.17 00:48:44.040 clat (usec): min=15121, max=84686, avg=35866.12, stdev=5316.28 00:48:44.040 lat (usec): min=15158, max=84715, avg=35883.37, stdev=5317.54 00:48:44.040 clat percentiles (usec): 00:48:44.040 | 1.00th=[21890], 5.00th=[24249], 10.00th=[28443], 20.00th=[32900], 00:48:44.040 | 30.00th=[35914], 40.00th=[36439], 50.00th=[36439], 60.00th=[36963], 00:48:44.040 | 70.00th=[37487], 80.00th=[38011], 90.00th=[39584], 95.00th=[42206], 00:48:44.040 | 99.00th=[53216], 99.50th=[61080], 99.90th=[61080], 99.95th=[61080], 00:48:44.040 | 99.99th=[84411] 00:48:44.040 bw ( KiB/s): min= 1552, max= 1936, per=4.23%, avg=1779.16, stdev=109.78, samples=19 00:48:44.040 iops : min= 388, max= 484, avg=444.79, stdev=27.44, samples=19 00:48:44.040 lat (msec) : 20=0.09%, 50=98.38%, 100=1.53% 00:48:44.040 cpu : 
usr=98.77%, sys=0.88%, ctx=16, majf=0, minf=1634 00:48:44.040 IO depths : 1=2.9%, 2=5.9%, 4=13.6%, 8=66.4%, 16=11.2%, 32=0.0%, >=64=0.0% 00:48:44.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.040 complete : 0=0.0%, 4=91.2%, 8=4.6%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:44.040 issued rwts: total=4450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:44.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:48:44.040 00:48:44.040 Run status group 0 (all jobs): 00:48:44.040 READ: bw=41.1MiB/s (43.1MB/s), 1719KiB/s-1903KiB/s (1760kB/s-1949kB/s), io=413MiB (433MB), run=10004-10034msec 00:48:44.040 ----------------------------------------------------- 00:48:44.040 Suppressions used: 00:48:44.040 count bytes template 00:48:44.040 45 402 /usr/src/fio/parse.c 00:48:44.040 1 8 libtcmalloc_minimal.so 00:48:44.040 1 904 libcrypto.so 00:48:44.040 ----------------------------------------------------- 00:48:44.040 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:48:44.040 14:59:07 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:44.040 bdev_null0 00:48:44.040 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:48:44.041 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:48:44.041 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:44.041 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:44.041 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:44.041 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:48:44.041 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:44.041 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:44.301 [2024-10-07 14:59:07.761791] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:48:44.301 14:59:07 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:44.301 bdev_null1 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 
00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:48:44.301 14:59:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:44.301 { 00:48:44.301 "params": { 00:48:44.301 "name": "Nvme$subsystem", 00:48:44.301 "trtype": "$TEST_TRANSPORT", 00:48:44.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:44.301 "adrfam": "ipv4", 00:48:44.301 "trsvcid": "$NVMF_PORT", 00:48:44.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:44.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:44.301 "hdgst": ${hdgst:-false}, 00:48:44.301 "ddgst": ${ddgst:-false} 00:48:44.301 }, 00:48:44.302 "method": "bdev_nvme_attach_controller" 00:48:44.302 } 00:48:44.302 EOF 00:48:44.302 )") 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:44.302 { 00:48:44.302 "params": { 00:48:44.302 "name": "Nvme$subsystem", 00:48:44.302 "trtype": "$TEST_TRANSPORT", 00:48:44.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:44.302 "adrfam": "ipv4", 00:48:44.302 "trsvcid": "$NVMF_PORT", 00:48:44.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:44.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:44.302 "hdgst": ${hdgst:-false}, 00:48:44.302 "ddgst": ${ddgst:-false} 00:48:44.302 }, 00:48:44.302 "method": "bdev_nvme_attach_controller" 00:48:44.302 } 00:48:44.302 EOF 00:48:44.302 )") 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:48:44.302 "params": { 00:48:44.302 "name": "Nvme0", 00:48:44.302 "trtype": "tcp", 00:48:44.302 "traddr": "10.0.0.2", 00:48:44.302 "adrfam": "ipv4", 00:48:44.302 "trsvcid": "4420", 00:48:44.302 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:44.302 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:44.302 "hdgst": false, 00:48:44.302 "ddgst": false 00:48:44.302 }, 00:48:44.302 "method": "bdev_nvme_attach_controller" 00:48:44.302 },{ 00:48:44.302 "params": { 00:48:44.302 "name": "Nvme1", 00:48:44.302 "trtype": "tcp", 00:48:44.302 "traddr": "10.0.0.2", 00:48:44.302 "adrfam": "ipv4", 00:48:44.302 "trsvcid": "4420", 00:48:44.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:48:44.302 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:48:44.302 "hdgst": false, 00:48:44.302 "ddgst": false 00:48:44.302 }, 00:48:44.302 "method": "bdev_nvme_attach_controller" 00:48:44.302 }' 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:48:44.302 14:59:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:44.869 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:48:44.869 ... 00:48:44.869 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:48:44.869 ... 00:48:44.869 fio-3.35 00:48:44.869 Starting 4 threads 00:48:51.453 00:48:51.453 filename0: (groupid=0, jobs=1): err= 0: pid=3433671: Mon Oct 7 14:59:14 2024 00:48:51.453 read: IOPS=1676, BW=13.1MiB/s (13.7MB/s)(65.5MiB/5004msec) 00:48:51.453 slat (nsec): min=5909, max=44178, avg=9306.27, stdev=3060.73 00:48:51.453 clat (usec): min=3016, max=45891, avg=4743.53, stdev=1436.95 00:48:51.453 lat (usec): min=3027, max=45923, avg=4752.84, stdev=1436.74 00:48:51.453 clat percentiles (usec): 00:48:51.453 | 1.00th=[ 3720], 5.00th=[ 4047], 10.00th=[ 4178], 20.00th=[ 4228], 00:48:51.453 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4686], 00:48:51.453 | 70.00th=[ 4948], 80.00th=[ 5145], 90.00th=[ 5211], 95.00th=[ 6587], 00:48:51.453 | 99.00th=[ 6849], 99.50th=[ 7046], 99.90th=[ 7504], 99.95th=[45876], 00:48:51.453 | 99.99th=[45876] 00:48:51.453 bw ( KiB/s): min=12464, max=13904, per=22.58%, avg=13400.89, stdev=451.71, samples=9 00:48:51.453 iops : min= 1558, max= 1738, avg=1675.11, stdev=56.46, samples=9 00:48:51.453 lat (msec) : 4=4.14%, 10=95.77%, 50=0.10% 00:48:51.453 cpu : usr=96.96%, sys=2.76%, ctx=5, majf=0, minf=1638 00:48:51.453 IO depths : 1=0.1%, 2=0.1%, 4=73.8%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:51.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:51.453 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:51.453 issued rwts: total=8389,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:51.453 latency : target=0, window=0, percentile=100.00%, depth=8 00:48:51.453 filename0: (groupid=0, jobs=1): err= 0: pid=3433672: Mon Oct 7 14:59:14 
2024 00:48:51.453 read: IOPS=2289, BW=17.9MiB/s (18.8MB/s)(89.5MiB/5003msec) 00:48:51.453 slat (nsec): min=5994, max=91184, avg=10377.51, stdev=3215.15 00:48:51.453 clat (usec): min=1476, max=5535, avg=3457.95, stdev=346.70 00:48:51.453 lat (usec): min=1492, max=5561, avg=3468.33, stdev=346.74 00:48:51.453 clat percentiles (usec): 00:48:51.453 | 1.00th=[ 2540], 5.00th=[ 2966], 10.00th=[ 3130], 20.00th=[ 3228], 00:48:51.453 | 30.00th=[ 3261], 40.00th=[ 3326], 50.00th=[ 3458], 60.00th=[ 3523], 00:48:51.453 | 70.00th=[ 3556], 80.00th=[ 3654], 90.00th=[ 3851], 95.00th=[ 4080], 00:48:51.453 | 99.00th=[ 4424], 99.50th=[ 4686], 99.90th=[ 5407], 99.95th=[ 5473], 00:48:51.453 | 99.99th=[ 5538] 00:48:51.453 bw ( KiB/s): min=17888, max=18944, per=30.89%, avg=18330.67, stdev=343.72, samples=9 00:48:51.453 iops : min= 2236, max= 2368, avg=2291.33, stdev=42.97, samples=9 00:48:51.453 lat (msec) : 2=0.02%, 4=94.27%, 10=5.71% 00:48:51.453 cpu : usr=96.44%, sys=3.22%, ctx=10, majf=0, minf=1633 00:48:51.453 IO depths : 1=0.1%, 2=17.2%, 4=54.3%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:51.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:51.453 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:51.453 issued rwts: total=11454,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:51.453 latency : target=0, window=0, percentile=100.00%, depth=8 00:48:51.453 filename1: (groupid=0, jobs=1): err= 0: pid=3433673: Mon Oct 7 14:59:14 2024 00:48:51.453 read: IOPS=1697, BW=13.3MiB/s (13.9MB/s)(66.4MiB/5002msec) 00:48:51.453 slat (nsec): min=5922, max=41815, avg=8888.88, stdev=2452.12 00:48:51.453 clat (usec): min=1215, max=7129, avg=4685.24, stdev=643.12 00:48:51.453 lat (usec): min=1221, max=7139, avg=4694.13, stdev=642.83 00:48:51.453 clat percentiles (usec): 00:48:51.453 | 1.00th=[ 3589], 5.00th=[ 4015], 10.00th=[ 4146], 20.00th=[ 4293], 00:48:51.453 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4490], 60.00th=[ 4686], 00:48:51.453 | 
70.00th=[ 4948], 80.00th=[ 5145], 90.00th=[ 5211], 95.00th=[ 6456], 00:48:51.453 | 99.00th=[ 6849], 99.50th=[ 6849], 99.90th=[ 7046], 99.95th=[ 7111], 00:48:51.453 | 99.99th=[ 7111] 00:48:51.453 bw ( KiB/s): min=13200, max=13840, per=22.91%, avg=13590.56, stdev=194.75, samples=9 00:48:51.453 iops : min= 1650, max= 1730, avg=1698.78, stdev=24.35, samples=9 00:48:51.453 lat (msec) : 2=0.06%, 4=4.06%, 10=95.88% 00:48:51.453 cpu : usr=96.94%, sys=2.76%, ctx=10, majf=0, minf=1634 00:48:51.453 IO depths : 1=0.1%, 2=0.3%, 4=73.6%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:51.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:51.453 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:51.453 issued rwts: total=8493,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:51.453 latency : target=0, window=0, percentile=100.00%, depth=8 00:48:51.453 filename1: (groupid=0, jobs=1): err= 0: pid=3433675: Mon Oct 7 14:59:14 2024 00:48:51.453 read: IOPS=1754, BW=13.7MiB/s (14.4MB/s)(68.6MiB/5003msec) 00:48:51.453 slat (nsec): min=5926, max=45371, avg=9918.89, stdev=3496.68 00:48:51.453 clat (usec): min=671, max=6564, avg=4536.85, stdev=449.78 00:48:51.453 lat (usec): min=683, max=6570, avg=4546.77, stdev=449.24 00:48:51.453 clat percentiles (usec): 00:48:51.453 | 1.00th=[ 3326], 5.00th=[ 3949], 10.00th=[ 4080], 20.00th=[ 4293], 00:48:51.453 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4490], 00:48:51.453 | 70.00th=[ 4817], 80.00th=[ 4948], 90.00th=[ 5145], 95.00th=[ 5211], 00:48:51.453 | 99.00th=[ 5538], 99.50th=[ 5866], 99.90th=[ 6325], 99.95th=[ 6390], 00:48:51.453 | 99.99th=[ 6587] 00:48:51.453 bw ( KiB/s): min=13776, max=14845, per=23.66%, avg=14039.70, stdev=331.08, samples=10 00:48:51.453 iops : min= 1722, max= 1855, avg=1754.90, stdev=41.22, samples=10 00:48:51.453 lat (usec) : 750=0.01% 00:48:51.453 lat (msec) : 2=0.01%, 4=6.15%, 10=93.82% 00:48:51.453 cpu : usr=91.68%, sys=5.26%, ctx=388, majf=0, minf=1631 
00:48:51.453 IO depths : 1=0.1%, 2=0.5%, 4=64.2%, 8=35.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:51.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:51.453 complete : 0=0.0%, 4=98.4%, 8=1.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:51.453 issued rwts: total=8776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:51.453 latency : target=0, window=0, percentile=100.00%, depth=8 00:48:51.453 00:48:51.453 Run status group 0 (all jobs): 00:48:51.453 READ: bw=57.9MiB/s (60.8MB/s), 13.1MiB/s-17.9MiB/s (13.7MB/s-18.8MB/s), io=290MiB (304MB), run=5002-5004msec 00:48:51.453 ----------------------------------------------------- 00:48:51.453 Suppressions used: 00:48:51.453 count bytes template 00:48:51.453 6 52 /usr/src/fio/parse.c 00:48:51.453 1 8 libtcmalloc_minimal.so 00:48:51.453 1 904 libcrypto.so 00:48:51.453 ----------------------------------------------------- 00:48:51.453 00:48:51.715 14:59:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:48:51.715 14:59:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:48:51.715 14:59:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:48:51.715 14:59:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:48:51.715 14:59:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:48:51.715 14:59:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:48:51.715 14:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:51.715 14:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:51.715 14:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:51.715 14:59:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:48:51.715 14:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 
-- # xtrace_disable 00:48:51.715 14:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:51.715 14:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:51.715 14:59:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:48:51.715 14:59:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:48:51.715 14:59:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:48:51.715 14:59:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:48:51.715 14:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:51.715 14:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:51.715 14:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:51.715 14:59:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:48:51.715 14:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:51.715 14:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:51.715 14:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:51.716 00:48:51.716 real 0m27.587s 00:48:51.716 user 5m20.232s 00:48:51.716 sys 0m5.413s 00:48:51.716 14:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:48:51.716 14:59:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:48:51.716 ************************************ 00:48:51.716 END TEST fio_dif_rand_params 00:48:51.716 ************************************ 00:48:51.716 14:59:15 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:48:51.716 14:59:15 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:48:51.716 14:59:15 nvmf_dif -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:48:51.716 14:59:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:48:51.716 ************************************ 00:48:51.716 START TEST fio_dif_digest 00:48:51.716 ************************************ 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # 
set +x 00:48:51.716 bdev_null0 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:48:51.716 [2024-10-07 14:59:15.351250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest 
-- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:48:51.716 { 00:48:51.716 "params": { 00:48:51.716 "name": "Nvme$subsystem", 00:48:51.716 "trtype": "$TEST_TRANSPORT", 00:48:51.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:48:51.716 "adrfam": "ipv4", 00:48:51.716 "trsvcid": "$NVMF_PORT", 00:48:51.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:48:51.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:48:51.716 "hdgst": ${hdgst:-false}, 00:48:51.716 "ddgst": ${ddgst:-false} 00:48:51.716 }, 00:48:51.716 "method": "bdev_nvme_attach_controller" 00:48:51.716 } 00:48:51.716 EOF 00:48:51.716 )") 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 
-- # shift 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:48:51.716 "params": { 00:48:51.716 "name": "Nvme0", 00:48:51.716 "trtype": "tcp", 00:48:51.716 "traddr": "10.0.0.2", 00:48:51.716 "adrfam": "ipv4", 00:48:51.716 "trsvcid": "4420", 00:48:51.716 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:48:51.716 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:48:51.716 "hdgst": true, 00:48:51.716 "ddgst": true 00:48:51.716 }, 00:48:51.716 "method": "bdev_nvme_attach_controller" 00:48:51.716 }' 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:48:51.716 14:59:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:48:52.334 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:48:52.334 ... 00:48:52.334 fio-3.35 00:48:52.334 Starting 3 threads 00:49:04.562 00:49:04.562 filename0: (groupid=0, jobs=1): err= 0: pid=3435222: Mon Oct 7 14:59:26 2024 00:49:04.562 read: IOPS=194, BW=24.3MiB/s (25.5MB/s)(244MiB/10047msec) 00:49:04.562 slat (nsec): min=6608, max=47108, avg=11190.40, stdev=1942.55 00:49:04.562 clat (usec): min=8880, max=57180, avg=15404.79, stdev=3359.36 00:49:04.562 lat (usec): min=8892, max=57190, avg=15415.98, stdev=3359.36 00:49:04.562 clat percentiles (usec): 00:49:04.562 | 1.00th=[10028], 5.00th=[10945], 10.00th=[11469], 20.00th=[13698], 00:49:04.562 | 30.00th=[14877], 40.00th=[15401], 50.00th=[15795], 60.00th=[16057], 00:49:04.562 | 70.00th=[16450], 80.00th=[16909], 90.00th=[17433], 95.00th=[17957], 00:49:04.562 | 99.00th=[19268], 99.50th=[20055], 99.90th=[56886], 99.95th=[57410], 00:49:04.562 | 99.99th=[57410] 00:49:04.562 bw ( KiB/s): min=22528, max=27392, per=34.25%, avg=24960.00, stdev=1408.31, samples=20 00:49:04.562 iops : min= 176, max= 214, avg=195.00, stdev=11.00, samples=20 00:49:04.562 lat (msec) : 10=0.77%, 20=98.72%, 50=0.10%, 100=0.41% 00:49:04.562 cpu : usr=94.93%, sys=4.80%, ctx=23, majf=0, minf=1635 00:49:04.562 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:49:04.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:04.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:04.562 issued rwts: total=1952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:04.562 latency : target=0, window=0, percentile=100.00%, depth=3 00:49:04.562 filename0: (groupid=0, jobs=1): err= 
0: pid=3435223: Mon Oct 7 14:59:26 2024 00:49:04.562 read: IOPS=204, BW=25.6MiB/s (26.8MB/s)(257MiB/10048msec) 00:49:04.562 slat (nsec): min=6451, max=45492, avg=10321.89, stdev=1316.06 00:49:04.562 clat (usec): min=8918, max=52118, avg=14626.95, stdev=2272.13 00:49:04.562 lat (usec): min=8928, max=52128, avg=14637.27, stdev=2272.14 00:49:04.562 clat percentiles (usec): 00:49:04.562 | 1.00th=[ 9765], 5.00th=[10552], 10.00th=[11076], 20.00th=[13173], 00:49:04.562 | 30.00th=[14222], 40.00th=[14746], 50.00th=[15139], 60.00th=[15401], 00:49:04.562 | 70.00th=[15795], 80.00th=[16057], 90.00th=[16581], 95.00th=[17171], 00:49:04.562 | 99.00th=[17957], 99.50th=[18482], 99.90th=[19792], 99.95th=[47973], 00:49:04.562 | 99.99th=[52167] 00:49:04.562 bw ( KiB/s): min=24832, max=28160, per=36.16%, avg=26354.53, stdev=855.35, samples=19 00:49:04.562 iops : min= 194, max= 220, avg=205.89, stdev= 6.68, samples=19 00:49:04.562 lat (msec) : 10=1.51%, 20=98.39%, 50=0.05%, 100=0.05% 00:49:04.562 cpu : usr=94.13%, sys=5.38%, ctx=383, majf=0, minf=1635 00:49:04.562 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:49:04.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:04.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:04.562 issued rwts: total=2056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:04.562 latency : target=0, window=0, percentile=100.00%, depth=3 00:49:04.562 filename0: (groupid=0, jobs=1): err= 0: pid=3435225: Mon Oct 7 14:59:26 2024 00:49:04.562 read: IOPS=171, BW=21.4MiB/s (22.4MB/s)(214MiB/10007msec) 00:49:04.562 slat (nsec): min=6422, max=47443, avg=11130.31, stdev=1958.08 00:49:04.562 clat (usec): min=9888, max=97869, avg=17510.89, stdev=9726.79 00:49:04.562 lat (usec): min=9898, max=97880, avg=17522.02, stdev=9726.86 00:49:04.562 clat percentiles (usec): 00:49:04.562 | 1.00th=[11338], 5.00th=[13435], 10.00th=[13829], 20.00th=[14353], 00:49:04.562 | 30.00th=[14746], 40.00th=[15008], 
50.00th=[15270], 60.00th=[15533], 00:49:04.562 | 70.00th=[15795], 80.00th=[16188], 90.00th=[16909], 95.00th=[54264], 00:49:04.562 | 99.00th=[56361], 99.50th=[57934], 99.90th=[58459], 99.95th=[98042], 00:49:04.562 | 99.99th=[98042] 00:49:04.562 bw ( KiB/s): min=17152, max=25600, per=29.91%, avg=21800.42, stdev=2608.27, samples=19 00:49:04.562 iops : min= 134, max= 200, avg=170.32, stdev=20.38, samples=19 00:49:04.562 lat (msec) : 10=0.12%, 20=93.99%, 100=5.90% 00:49:04.562 cpu : usr=94.94%, sys=4.80%, ctx=17, majf=0, minf=1636 00:49:04.562 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:49:04.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:04.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:04.562 issued rwts: total=1713,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:04.562 latency : target=0, window=0, percentile=100.00%, depth=3 00:49:04.562 00:49:04.562 Run status group 0 (all jobs): 00:49:04.562 READ: bw=71.2MiB/s (74.6MB/s), 21.4MiB/s-25.6MiB/s (22.4MB/s-26.8MB/s), io=715MiB (750MB), run=10007-10048msec 00:49:04.562 ----------------------------------------------------- 00:49:04.562 Suppressions used: 00:49:04.562 count bytes template 00:49:04.562 5 44 /usr/src/fio/parse.c 00:49:04.562 1 8 libtcmalloc_minimal.so 00:49:04.562 1 904 libcrypto.so 00:49:04.562 ----------------------------------------------------- 00:49:04.562 00:49:04.562 14:59:27 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:49:04.562 14:59:27 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:49:04.562 14:59:27 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:49:04.562 14:59:27 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:49:04.562 14:59:27 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:49:04.562 14:59:27 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:49:04.562 14:59:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:04.562 14:59:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:49:04.562 14:59:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:04.562 14:59:27 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:49:04.562 14:59:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:04.562 14:59:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:49:04.562 14:59:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:04.562 00:49:04.562 real 0m12.309s 00:49:04.562 user 0m41.424s 00:49:04.562 sys 0m2.089s 00:49:04.562 14:59:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:49:04.562 14:59:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:49:04.562 ************************************ 00:49:04.562 END TEST fio_dif_digest 00:49:04.562 ************************************ 00:49:04.562 14:59:27 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:49:04.562 14:59:27 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:49:04.562 14:59:27 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:49:04.562 14:59:27 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:49:04.562 14:59:27 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:49:04.562 14:59:27 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:49:04.562 14:59:27 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:49:04.562 14:59:27 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:49:04.562 rmmod nvme_tcp 00:49:04.562 rmmod nvme_fabrics 00:49:04.562 rmmod nvme_keyring 00:49:04.562 14:59:27 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:49:04.562 14:59:27 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:49:04.562 14:59:27 nvmf_dif -- 
nvmf/common.sh@129 -- # return 0 00:49:04.562 14:59:27 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 3423970 ']' 00:49:04.562 14:59:27 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 3423970 00:49:04.562 14:59:27 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 3423970 ']' 00:49:04.562 14:59:27 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 3423970 00:49:04.562 14:59:27 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:49:04.562 14:59:27 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:49:04.562 14:59:27 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3423970 00:49:04.562 14:59:27 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:49:04.562 14:59:27 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:49:04.562 14:59:27 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3423970' 00:49:04.562 killing process with pid 3423970 00:49:04.562 14:59:27 nvmf_dif -- common/autotest_common.sh@969 -- # kill 3423970 00:49:04.562 14:59:27 nvmf_dif -- common/autotest_common.sh@974 -- # wait 3423970 00:49:05.132 14:59:28 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:49:05.132 14:59:28 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:49:08.431 Waiting for block devices as requested 00:49:08.431 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:49:08.431 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:49:08.431 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:49:08.691 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:49:08.691 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:49:08.691 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:49:08.951 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:49:08.951 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:49:08.951 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:49:09.211 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:49:09.211 
0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:49:09.211 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:49:09.471 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:49:09.471 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:49:09.471 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:49:09.471 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:49:09.731 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:49:09.992 14:59:33 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:49:09.992 14:59:33 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:49:09.992 14:59:33 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:49:09.992 14:59:33 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:49:09.992 14:59:33 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:49:09.992 14:59:33 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:49:09.992 14:59:33 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:49:09.992 14:59:33 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:49:09.992 14:59:33 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:09.992 14:59:33 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:49:09.992 14:59:33 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:12.538 14:59:35 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:49:12.538 00:49:12.538 real 1m24.542s 00:49:12.538 user 8m10.839s 00:49:12.538 sys 0m22.589s 00:49:12.538 14:59:35 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:49:12.538 14:59:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:49:12.538 ************************************ 00:49:12.538 END TEST nvmf_dif 00:49:12.538 ************************************ 00:49:12.538 14:59:35 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:49:12.538 14:59:35 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:49:12.538 14:59:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:49:12.538 14:59:35 -- common/autotest_common.sh@10 -- # set +x 00:49:12.538 ************************************ 00:49:12.538 START TEST nvmf_abort_qd_sizes 00:49:12.538 ************************************ 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:49:12.538 * Looking for test storage... 00:49:12.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:49:12.538 
14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:49:12.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:12.538 --rc genhtml_branch_coverage=1 00:49:12.538 --rc genhtml_function_coverage=1 00:49:12.538 --rc genhtml_legend=1 00:49:12.538 --rc geninfo_all_blocks=1 00:49:12.538 --rc geninfo_unexecuted_blocks=1 
00:49:12.538 00:49:12.538 ' 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:49:12.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:12.538 --rc genhtml_branch_coverage=1 00:49:12.538 --rc genhtml_function_coverage=1 00:49:12.538 --rc genhtml_legend=1 00:49:12.538 --rc geninfo_all_blocks=1 00:49:12.538 --rc geninfo_unexecuted_blocks=1 00:49:12.538 00:49:12.538 ' 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:49:12.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:12.538 --rc genhtml_branch_coverage=1 00:49:12.538 --rc genhtml_function_coverage=1 00:49:12.538 --rc genhtml_legend=1 00:49:12.538 --rc geninfo_all_blocks=1 00:49:12.538 --rc geninfo_unexecuted_blocks=1 00:49:12.538 00:49:12.538 ' 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:49:12.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:49:12.538 --rc genhtml_branch_coverage=1 00:49:12.538 --rc genhtml_function_coverage=1 00:49:12.538 --rc genhtml_legend=1 00:49:12.538 --rc geninfo_all_blocks=1 00:49:12.538 --rc geninfo_unexecuted_blocks=1 00:49:12.538 00:49:12.538 ' 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:12.538 14:59:35 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:49:12.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:49:12.539 14:59:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:49:20.683 Found 0000:31:00.0 (0x8086 - 0x159b) 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:49:20.683 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:49:20.683 Found 0000:31:00.1 (0x8086 - 0x159b) 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:49:20.684 Found net devices under 0000:31:00.0: cvl_0_0 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- 
nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:49:20.684 Found net devices under 0000:31:00.1: cvl_0_1 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:49:20.684 14:59:42 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:49:20.684 14:59:43 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:49:20.684 14:59:43 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:49:20.684 14:59:43 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:49:20.684 14:59:43 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:49:20.684 14:59:43 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:49:20.684 14:59:43 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:49:20.684 14:59:43 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:49:20.684 14:59:43 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:49:20.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:49:20.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:49:20.684 00:49:20.684 --- 10.0.0.2 ping statistics --- 00:49:20.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:20.684 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:49:20.684 14:59:43 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:49:20.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:49:20.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:49:20.684 00:49:20.684 --- 10.0.0.1 ping statistics --- 00:49:20.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:20.684 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:49:20.684 14:59:43 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:20.684 14:59:43 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:49:20.684 14:59:43 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:49:20.684 14:59:43 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:49:22.600 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:49:22.600 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:49:22.600 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:49:22.600 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:49:22.600 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:49:22.600 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:49:22.600 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:49:22.600 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:49:22.600 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:49:22.600 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:49:22.600 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:49:22.600 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:49:22.600 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:49:22.600 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:49:22.600 0000:00:01.0 (8086 0b00): 
ioatdma -> vfio-pci 00:49:22.600 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:49:22.600 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:49:22.861 14:59:46 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:22.861 14:59:46 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:49:22.861 14:59:46 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:49:22.861 14:59:46 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:22.861 14:59:46 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:49:22.861 14:59:46 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:49:22.861 14:59:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:49:22.861 14:59:46 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:49:22.861 14:59:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:49:22.861 14:59:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:22.861 14:59:46 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=3444869 00:49:22.861 14:59:46 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 3444869 00:49:22.861 14:59:46 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:49:22.861 14:59:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 3444869 ']' 00:49:22.861 14:59:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:22.861 14:59:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:49:22.861 14:59:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:49:22.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:22.861 14:59:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:49:22.861 14:59:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:23.122 [2024-10-07 14:59:46.593554] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:49:23.122 [2024-10-07 14:59:46.593657] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:23.122 [2024-10-07 14:59:46.717810] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:49:23.384 [2024-10-07 14:59:46.902129] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:23.384 [2024-10-07 14:59:46.902172] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:23.384 [2024-10-07 14:59:46.902184] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:23.384 [2024-10-07 14:59:46.902195] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:23.384 [2024-10-07 14:59:46.902204] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:49:23.384 [2024-10-07 14:59:46.904430] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:49:23.384 [2024-10-07 14:59:46.904512] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:49:23.384 [2024-10-07 14:59:46.904630] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:49:23.384 [2024-10-07 14:59:46.904651] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:49:23.955 14:59:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:23.955 ************************************ 00:49:23.955 START TEST spdk_target_abort 00:49:23.955 ************************************ 00:49:23.955 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:49:23.955 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:49:23.955 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:49:23.955 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:23.955 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:24.216 spdk_targetn1 00:49:24.216 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:24.216 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:49:24.216 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:24.216 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:24.216 [2024-10-07 14:59:47.802943] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:24.216 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:24.216 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:49:24.216 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:24.216 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:24.216 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:24.216 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:49:24.216 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:24.216 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:24.216 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:24.217 [2024-10-07 14:59:47.843385] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:49:24.217 14:59:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:24.477 [2024-10-07 14:59:48.048639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1816 len:8 PRP1 0x2000078c5000 PRP2 0x0 00:49:24.477 [2024-10-07 14:59:48.048683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00e4 p:1 m:0 dnr:0 00:49:24.477 [2024-10-07 14:59:48.055734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2128 len:8 PRP1 0x2000078bf000 PRP2 0x0 00:49:24.477 [2024-10-07 14:59:48.055758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:49:24.477 [2024-10-07 14:59:48.095969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3992 len:8 PRP1 0x2000078c5000 PRP2 0x0 00:49:24.477 [2024-10-07 
14:59:48.095994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00f6 p:0 m:0 dnr:0 00:49:27.779 Initializing NVMe Controllers 00:49:27.779 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:49:27.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:49:27.779 Initialization complete. Launching workers. 00:49:27.779 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15711, failed: 3 00:49:27.779 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3636, failed to submit 12078 00:49:27.779 success 696, unsuccessful 2940, failed 0 00:49:27.779 14:59:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:49:27.779 14:59:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:27.779 [2024-10-07 14:59:51.340338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:888 len:8 PRP1 0x200007c51000 PRP2 0x0 00:49:27.779 [2024-10-07 14:59:51.340391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:0072 p:1 m:0 dnr:0 00:49:27.779 [2024-10-07 14:59:51.356309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:173 nsid:1 lba:1168 len:8 PRP1 0x200007c47000 PRP2 0x0 00:49:27.779 [2024-10-07 14:59:51.356342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:173 cdw0:0 sqhd:009c p:1 m:0 dnr:0 00:49:27.779 [2024-10-07 14:59:51.433733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:3056 len:8 PRP1 0x200007c55000 PRP2 0x0 
00:49:27.779 [2024-10-07 14:59:51.433766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:0089 p:0 m:0 dnr:0 00:49:28.350 [2024-10-07 14:59:51.991250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:16096 len:8 PRP1 0x200007c5b000 PRP2 0x0 00:49:28.350 [2024-10-07 14:59:51.991291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:00de p:0 m:0 dnr:0 00:49:30.898 Initializing NVMe Controllers 00:49:30.898 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:49:30.898 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:49:30.898 Initialization complete. Launching workers. 00:49:30.898 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8683, failed: 4 00:49:30.898 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1221, failed to submit 7466 00:49:30.898 success 303, unsuccessful 918, failed 0 00:49:30.898 14:59:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:49:30.898 14:59:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:32.812 [2024-10-07 14:59:56.304238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:142 nsid:1 lba:166984 len:8 PRP1 0x2000078f1000 PRP2 0x0 00:49:32.812 [2024-10-07 14:59:56.304286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:142 cdw0:0 sqhd:00ec p:1 m:0 dnr:0 00:49:34.198 Initializing NVMe Controllers 00:49:34.198 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:49:34.198 
Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:49:34.198 Initialization complete. Launching workers. 00:49:34.198 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38561, failed: 1 00:49:34.198 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2485, failed to submit 36077 00:49:34.198 success 562, unsuccessful 1923, failed 0 00:49:34.198 14:59:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:49:34.198 14:59:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:34.198 14:59:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:34.198 14:59:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:34.198 14:59:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:49:34.198 14:59:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:49:34.198 14:59:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:36.111 14:59:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:49:36.111 14:59:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3444869 00:49:36.111 14:59:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 3444869 ']' 00:49:36.111 14:59:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 3444869 00:49:36.111 14:59:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:49:36.111 14:59:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:49:36.111 14:59:59 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3444869 00:49:36.111 14:59:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:49:36.111 14:59:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:49:36.111 14:59:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3444869' 00:49:36.111 killing process with pid 3444869 00:49:36.111 14:59:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 3444869 00:49:36.111 14:59:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 3444869 00:49:37.053 00:49:37.053 real 0m12.947s 00:49:37.053 user 0m51.229s 00:49:37.053 sys 0m2.021s 00:49:37.053 15:00:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:49:37.053 15:00:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:37.053 ************************************ 00:49:37.053 END TEST spdk_target_abort 00:49:37.053 ************************************ 00:49:37.053 15:00:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:49:37.053 15:00:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:49:37.053 15:00:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:49:37.053 15:00:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:49:37.053 ************************************ 00:49:37.053 START TEST kernel_target_abort 00:49:37.053 ************************************ 00:49:37.053 15:00:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:49:37.053 15:00:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:49:37.053 
15:00:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:49:37.053 15:00:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:49:37.053 15:00:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:49:37.053 15:00:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:49:37.053 15:00:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:49:37.053 15:00:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:49:37.053 15:00:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:49:37.053 15:00:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:49:37.053 15:00:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:49:37.053 15:00:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:49:37.053 15:00:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:49:37.053 15:00:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:49:37.053 15:00:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:49:37.053 15:00:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:49:37.053 15:00:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:49:37.053 15:00:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # 
kernel_port=/sys/kernel/config/nvmet/ports/1 00:49:37.053 15:00:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:49:37.053 15:00:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:49:37.053 15:00:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:49:37.053 15:00:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:49:37.053 15:00:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:49:40.353 Waiting for block devices as requested 00:49:40.353 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:49:40.353 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:49:40.353 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:49:40.353 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:49:40.615 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:49:40.615 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:49:40.615 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:49:40.875 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:49:40.875 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:49:41.136 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:49:41.136 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:49:41.136 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:49:41.136 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:49:41.397 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:49:41.397 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:49:41.397 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:49:41.397 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:49:42.340 15:00:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:49:42.340 15:00:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:49:42.340 15:00:05 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:49:42.340 15:00:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:49:42.340 15:00:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:49:42.340 15:00:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:49:42.340 15:00:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:49:42.340 15:00:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:49:42.340 15:00:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:49:42.340 No valid GPT data, bailing 00:49:42.340 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:49:42.601 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:49:42.601 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:49:42.601 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:49:42.601 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:49:42.601 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:49:42.601 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:49:42.601 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:49:42.601 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- 
# echo SPDK-nqn.2016-06.io.spdk:testnqn 00:49:42.601 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:49:42.601 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:49:42.601 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:49:42.601 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:49:42.601 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:49:42.601 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:49:42.601 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:49:42.601 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:49:42.601 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:49:42.601 00:49:42.601 Discovery Log Number of Records 2, Generation counter 2 00:49:42.601 =====Discovery Log Entry 0====== 00:49:42.601 trtype: tcp 00:49:42.601 adrfam: ipv4 00:49:42.601 subtype: current discovery subsystem 00:49:42.601 treq: not specified, sq flow control disable supported 00:49:42.601 portid: 1 00:49:42.601 trsvcid: 4420 00:49:42.601 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:49:42.601 traddr: 10.0.0.1 00:49:42.601 eflags: none 00:49:42.601 sectype: none 00:49:42.601 =====Discovery Log Entry 1====== 00:49:42.601 trtype: tcp 00:49:42.601 adrfam: ipv4 00:49:42.601 subtype: nvme subsystem 00:49:42.601 treq: not specified, sq flow control disable supported 00:49:42.601 portid: 1 00:49:42.601 trsvcid: 4420 00:49:42.601 subnqn: nqn.2016-06.io.spdk:testnqn 00:49:42.601 
traddr: 10.0.0.1 00:49:42.601 eflags: none 00:49:42.601 sectype: none 00:49:42.601 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:49:42.601 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:49:42.601 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:49:42.602 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:49:42.602 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:49:42.602 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:49:42.602 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:49:42.602 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:49:42.602 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:49:42.602 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:42.602 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:49:42.602 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:42.602 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:49:42.602 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:42.602 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 
00:49:42.602 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:42.602 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:49:42.602 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:49:42.602 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:42.602 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:49:42.602 15:00:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:45.901 Initializing NVMe Controllers 00:49:45.901 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:49:45.901 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:49:45.901 Initialization complete. Launching workers. 
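The `rabort` trace above builds one transport-ID string by appending `trtype`, `adrfam`, `traddr`, `trsvcid` and `subnqn` in turn, then sweeps the abort example over queue depths 4, 24 and 64 (abort_qd_sizes.sh@26–34). A minimal sketch of that loop shape, with the SPDK `abort` binary replaced by a placeholder `echo` so the sketch runs anywhere:

```shell
# Sketch of the queue-depth sweep from abort_qd_sizes.sh: the same
# transport-ID string is reused while only -q varies. "abort_bin" is a
# stand-in for spdk/build/examples/abort.
abort_bin="echo abort"   # placeholder, not the real SPDK binary
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
qds=(4 24 64)
for qd in "${qds[@]}"; do
    $abort_bin -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done
```

The later log records show why the sweep matters: at `-q 4` every abort is submitted (60448 submitted, 0 failed to submit), while at `-q 24` and `-q 64` most aborts fail to submit because the per-queue abort limit is exceeded.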
00:49:45.901 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 60448, failed: 0 00:49:45.901 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 60448, failed to submit 0 00:49:45.901 success 0, unsuccessful 60448, failed 0 00:49:45.901 15:00:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:49:45.901 15:00:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:49.203 Initializing NVMe Controllers 00:49:49.203 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:49:49.203 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:49:49.203 Initialization complete. Launching workers. 00:49:49.203 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 97049, failed: 0 00:49:49.203 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24454, failed to submit 72595 00:49:49.203 success 0, unsuccessful 24454, failed 0 00:49:49.203 15:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:49:49.203 15:00:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:49:52.504 Initializing NVMe Controllers 00:49:52.504 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:49:52.504 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:49:52.504 Initialization complete. Launching workers. 
00:49:52.504 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 91796, failed: 0 00:49:52.504 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22954, failed to submit 68842 00:49:52.504 success 0, unsuccessful 22954, failed 0 00:49:52.504 15:00:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:49:52.504 15:00:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:49:52.504 15:00:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:49:52.504 15:00:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:49:52.504 15:00:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:49:52.504 15:00:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:49:52.504 15:00:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:49:52.504 15:00:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:49:52.504 15:00:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:49:52.504 15:00:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:49:55.807 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:49:55.807 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:49:55.807 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:49:55.807 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:49:55.807 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:49:55.807 
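The `clean_kernel_target` trace above (nvmf/common.sh@710–721) tears the configfs target down in the reverse order of setup: disable the namespace, remove the port's subsystem symlink, then `rmdir` the namespace, port and subsystem directories before unloading the modules. A hedged dry-run sketch of that ordering (the `run`/`DRY_RUN` wrapper is an addition for illustration):

```shell
# Dry-run sketch of clean_kernel_target: teardown must undo setup in
# reverse -- configfs refuses to rmdir a directory that is still linked
# or non-empty. DRY_RUN=1 only prints the commands.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else eval "$*"; fi; }

subnqn=nqn.2016-06.io.spdk:testnqn
base=/sys/kernel/config/nvmet

run "echo 0 > $base/subsystems/$subnqn/namespaces/1/enable"
run "rm -f $base/ports/1/subsystems/$subnqn"
run "rmdir $base/subsystems/$subnqn/namespaces/1"
run "rmdir $base/ports/1"
run "rmdir $base/subsystems/$subnqn"
run "modprobe -r nvmet_tcp nvmet"
```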
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:49:55.807 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:49:55.807 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:49:55.807 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:49:55.807 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:49:55.807 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:49:55.807 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:49:55.807 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:49:55.807 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:49:55.807 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:49:55.807 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:49:57.715 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:49:57.715 00:49:57.715 real 0m20.860s 00:49:57.715 user 0m10.123s 00:49:57.715 sys 0m6.601s 00:49:57.715 15:00:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:49:57.715 15:00:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:49:57.715 ************************************ 00:49:57.715 END TEST kernel_target_abort 00:49:57.715 ************************************ 00:49:57.715 15:00:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:49:57.715 15:00:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:49:57.715 15:00:21 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:49:57.715 15:00:21 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:49:57.715 15:00:21 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:49:57.715 15:00:21 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:49:57.715 15:00:21 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:49:57.715 15:00:21 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:49:57.715 rmmod nvme_tcp 00:49:57.715 rmmod nvme_fabrics 00:49:57.976 rmmod nvme_keyring 00:49:57.976 15:00:21 nvmf_abort_qd_sizes -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:49:57.976 15:00:21 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:49:57.976 15:00:21 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:49:57.976 15:00:21 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 3444869 ']' 00:49:57.976 15:00:21 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 3444869 00:49:57.976 15:00:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 3444869 ']' 00:49:57.976 15:00:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 3444869 00:49:57.976 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3444869) - No such process 00:49:57.976 15:00:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 3444869 is not found' 00:49:57.976 Process with pid 3444869 is not found 00:49:57.976 15:00:21 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:49:57.976 15:00:21 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:50:01.273 Waiting for block devices as requested 00:50:01.273 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:50:01.273 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:50:01.273 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:50:01.273 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:50:01.533 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:50:01.533 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:50:01.533 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:50:01.794 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:50:01.794 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:50:02.055 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:50:02.055 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:50:02.055 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:50:02.055 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:50:02.317 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 
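The `killprocess 3444869` trace above probes the target pid with `kill -0` (autotest_common.sh@954); signal 0 delivers nothing and merely tests whether the process exists, which is why the log shows `kill: (3444869) - No such process` followed by the `Process with pid 3444869 is not found` message rather than an actual kill. A simplified sketch of that behaviour (the real helper also retries and waits; this only shows the probe):

```shell
# Simplified sketch of the killprocess probe seen in the log: `kill -0`
# checks for existence without sending a signal; only a live process is
# actually signalled, otherwise the helper reports it as not found.
killprocess() {
    local pid=$1
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"
    else
        echo "Process with pid $pid is not found"
    fi
}
```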
00:50:02.317 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:50:02.317 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:50:02.317 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:50:02.888 15:00:26 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:50:02.888 15:00:26 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:50:02.888 15:00:26 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:50:02.888 15:00:26 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:50:02.888 15:00:26 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:50:02.888 15:00:26 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:50:02.888 15:00:26 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:50:02.888 15:00:26 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:50:02.888 15:00:26 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:02.888 15:00:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:50:02.888 15:00:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:04.867 15:00:28 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:50:04.867 00:50:04.867 real 0m52.703s 00:50:04.867 user 1m6.359s 00:50:04.867 sys 0m18.948s 00:50:04.867 15:00:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:50:04.867 15:00:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:50:04.867 ************************************ 00:50:04.867 END TEST nvmf_abort_qd_sizes 00:50:04.867 ************************************ 00:50:04.867 15:00:28 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:50:04.867 15:00:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:50:04.867 15:00:28 -- common/autotest_common.sh@1107 
-- # xtrace_disable 00:50:04.867 15:00:28 -- common/autotest_common.sh@10 -- # set +x 00:50:04.867 ************************************ 00:50:04.867 START TEST keyring_file 00:50:04.867 ************************************ 00:50:04.867 15:00:28 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:50:05.137 * Looking for test storage... 00:50:05.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:50:05.137 15:00:28 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:50:05.137 15:00:28 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:50:05.137 15:00:28 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:50:05.137 15:00:28 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@345 -- # : 1 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:50:05.137 15:00:28 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@353 -- # local d=1 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@355 -- # echo 1 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@353 -- # local d=2 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@355 -- # echo 2 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:50:05.137 15:00:28 keyring_file -- scripts/common.sh@368 -- # return 0 00:50:05.137 15:00:28 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:50:05.137 15:00:28 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:50:05.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:05.137 --rc genhtml_branch_coverage=1 00:50:05.137 --rc genhtml_function_coverage=1 00:50:05.137 --rc genhtml_legend=1 00:50:05.137 --rc geninfo_all_blocks=1 00:50:05.137 --rc geninfo_unexecuted_blocks=1 00:50:05.137 00:50:05.137 ' 00:50:05.137 15:00:28 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:50:05.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:05.137 --rc genhtml_branch_coverage=1 00:50:05.137 --rc genhtml_function_coverage=1 00:50:05.137 --rc genhtml_legend=1 00:50:05.137 --rc geninfo_all_blocks=1 00:50:05.137 --rc 
geninfo_unexecuted_blocks=1 00:50:05.137 00:50:05.137 ' 00:50:05.137 15:00:28 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:50:05.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:05.137 --rc genhtml_branch_coverage=1 00:50:05.137 --rc genhtml_function_coverage=1 00:50:05.137 --rc genhtml_legend=1 00:50:05.137 --rc geninfo_all_blocks=1 00:50:05.137 --rc geninfo_unexecuted_blocks=1 00:50:05.137 00:50:05.137 ' 00:50:05.137 15:00:28 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:50:05.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:05.137 --rc genhtml_branch_coverage=1 00:50:05.137 --rc genhtml_function_coverage=1 00:50:05.137 --rc genhtml_legend=1 00:50:05.137 --rc geninfo_all_blocks=1 00:50:05.137 --rc geninfo_unexecuted_blocks=1 00:50:05.137 00:50:05.137 ' 00:50:05.137 15:00:28 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:50:05.137 15:00:28 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:50:05.137 15:00:28 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:50:05.137 15:00:28 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:50:05.137 15:00:28 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:50:05.137 15:00:28 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:50:05.137 15:00:28 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:50:05.137 15:00:28 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:50:05.137 15:00:28 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:50:05.137 15:00:28 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:50:05.137 15:00:28 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:50:05.137 15:00:28 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:50:05.137 15:00:28 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:50:05.137 15:00:28 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:50:05.137 15:00:28 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:50:05.137 15:00:28 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:50:05.137 15:00:28 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:50:05.137 15:00:28 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:50:05.137 15:00:28 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:50:05.138 15:00:28 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:50:05.138 15:00:28 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:50:05.138 15:00:28 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:50:05.138 15:00:28 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:05.138 15:00:28 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:05.138 15:00:28 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:05.138 15:00:28 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:05.138 15:00:28 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:05.138 15:00:28 keyring_file -- paths/export.sh@5 -- # export PATH 00:50:05.138 15:00:28 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:05.138 15:00:28 keyring_file -- nvmf/common.sh@51 -- # : 0 00:50:05.138 15:00:28 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:50:05.138 15:00:28 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:50:05.138 15:00:28 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:50:05.138 15:00:28 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:50:05.138 15:00:28 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:50:05.138 15:00:28 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:50:05.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:50:05.138 15:00:28 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:50:05.138 15:00:28 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:50:05.138 15:00:28 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:50:05.138 15:00:28 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:50:05.138 15:00:28 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:50:05.138 15:00:28 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:50:05.138 15:00:28 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:50:05.138 15:00:28 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:50:05.138 15:00:28 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:50:05.138 15:00:28 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:50:05.138 15:00:28 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:50:05.138 15:00:28 keyring_file -- keyring/common.sh@17 -- # name=key0 00:50:05.138 15:00:28 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:50:05.138 15:00:28 keyring_file -- keyring/common.sh@17 -- # digest=0 00:50:05.138 15:00:28 keyring_file -- keyring/common.sh@18 -- # mktemp 00:50:05.138 15:00:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.bcxaG4PE9N 00:50:05.138 15:00:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:50:05.138 15:00:28 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:50:05.138 15:00:28 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:50:05.138 15:00:28 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:50:05.138 15:00:28 keyring_file -- nvmf/common.sh@730 
-- # key=00112233445566778899aabbccddeeff 00:50:05.138 15:00:28 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:50:05.138 15:00:28 keyring_file -- nvmf/common.sh@731 -- # python - 00:50:05.138 15:00:28 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.bcxaG4PE9N 00:50:05.138 15:00:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.bcxaG4PE9N 00:50:05.138 15:00:28 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.bcxaG4PE9N 00:50:05.138 15:00:28 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:50:05.138 15:00:28 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:50:05.138 15:00:28 keyring_file -- keyring/common.sh@17 -- # name=key1 00:50:05.138 15:00:28 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:50:05.138 15:00:28 keyring_file -- keyring/common.sh@17 -- # digest=0 00:50:05.138 15:00:28 keyring_file -- keyring/common.sh@18 -- # mktemp 00:50:05.138 15:00:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.C67qDR9hWc 00:50:05.138 15:00:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:50:05.138 15:00:28 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:50:05.138 15:00:28 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:50:05.138 15:00:28 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:50:05.138 15:00:28 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:50:05.138 15:00:28 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:50:05.138 15:00:28 keyring_file -- nvmf/common.sh@731 -- # python - 00:50:05.419 15:00:28 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.C67qDR9hWc 00:50:05.419 15:00:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.C67qDR9hWc 00:50:05.419 15:00:28 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.C67qDR9hWc 
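The `prep_key` trace above pipes the hex key through an inline python snippet (nvmf/common.sh@731) to turn `00112233445566778899aabbccddeeff` into an NVMe TLS PSK interchange string before writing it to the `mktemp` file. The log elides the python body, so the sketch below is a hedged reconstruction of what it plausibly computes: base64 of the key with a little-endian CRC32 appended, wrapped in a `NVMeTLSkey-1:<digest>:` envelope (digest `0`, i.e. `00`, meaning no hash).

```shell
# Hedged reconstruction of format_interchange_psk: the python at
# nvmf/common.sh@731 is not shown in the log, so this encodes the usual
# NVMe TLS PSK interchange layout, base64(key || CRC32(key)) inside a
# "NVMeTLSkey-1:<digest>:" envelope -- an assumption, not SPDK's code.
format_interchange_psk() {
    python3 - "$1" "$2" <<'PY'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]),
                                       base64.b64encode(key + crc).decode()))
PY
}
format_interchange_psk 00112233445566778899aabbccddeeff 0
```

A 16-byte key plus the 4-byte CRC base64-encodes to 28 characters, which matches the shape of the strings later fed to `keyring_file_add_key`.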
00:50:05.419 15:00:28 keyring_file -- keyring/file.sh@30 -- # tgtpid=3456105 00:50:05.419 15:00:28 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3456105 00:50:05.419 15:00:28 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:50:05.420 15:00:28 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3456105 ']' 00:50:05.420 15:00:28 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:05.420 15:00:28 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:50:05.420 15:00:28 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:05.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:05.420 15:00:28 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:50:05.420 15:00:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:50:05.420 [2024-10-07 15:00:28.960097] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:50:05.420 [2024-10-07 15:00:28.960244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3456105 ] 00:50:05.420 [2024-10-07 15:00:29.094775] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:05.700 [2024-10-07 15:00:29.278195] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:50:06.309 15:00:29 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:50:06.310 15:00:29 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:50:06.310 15:00:29 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:50:06.310 15:00:29 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:50:06.310 15:00:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:50:06.310 [2024-10-07 15:00:29.924244] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:50:06.310 null0 00:50:06.310 [2024-10-07 15:00:29.956297] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:50:06.310 [2024-10-07 15:00:29.956722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:50:06.310 15:00:29 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:50:06.310 15:00:29 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:50:06.310 15:00:29 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:50:06.310 15:00:29 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:50:06.310 15:00:29 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:50:06.310 15:00:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:50:06.310 15:00:29 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:50:06.310 15:00:29 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:50:06.310 15:00:29 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:50:06.310 15:00:29 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:50:06.310 15:00:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:50:06.310 [2024-10-07 15:00:29.988340] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:50:06.310 request: 00:50:06.310 { 00:50:06.310 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:50:06.310 "secure_channel": false, 00:50:06.310 "listen_address": { 00:50:06.310 "trtype": "tcp", 00:50:06.310 "traddr": "127.0.0.1", 00:50:06.310 "trsvcid": "4420" 00:50:06.310 }, 00:50:06.310 "method": "nvmf_subsystem_add_listener", 00:50:06.310 "req_id": 1 00:50:06.310 } 00:50:06.310 Got JSON-RPC error response 00:50:06.310 response: 00:50:06.310 { 00:50:06.310 "code": -32602, 00:50:06.310 "message": "Invalid parameters" 00:50:06.310 } 00:50:06.310 15:00:29 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:50:06.310 15:00:29 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:50:06.310 15:00:29 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:50:06.310 15:00:29 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:50:06.310 15:00:29 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:50:06.310 15:00:29 keyring_file -- keyring/file.sh@47 -- # bperfpid=3456263 00:50:06.310 15:00:29 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3456263 /var/tmp/bperf.sock 00:50:06.310 15:00:29 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3456263 ']' 00:50:06.310 15:00:29 keyring_file -- keyring/file.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:50:06.310 15:00:29 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:50:06.310 15:00:29 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:50:06.310 15:00:29 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:50:06.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:50:06.310 15:00:29 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:50:06.310 15:00:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:50:06.590 [2024-10-07 15:00:30.074069] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:50:06.590 [2024-10-07 15:00:30.074179] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3456263 ] 00:50:06.590 [2024-10-07 15:00:30.206192] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:06.864 [2024-10-07 15:00:30.385276] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:50:07.124 15:00:30 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:50:07.124 15:00:30 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:50:07.124 15:00:30 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bcxaG4PE9N 00:50:07.124 15:00:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bcxaG4PE9N 00:50:07.384 15:00:30 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 
/tmp/tmp.C67qDR9hWc 00:50:07.384 15:00:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.C67qDR9hWc 00:50:07.644 15:00:31 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:50:07.644 15:00:31 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:50:07.644 15:00:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:07.644 15:00:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:07.644 15:00:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:50:07.644 15:00:31 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.bcxaG4PE9N == \/\t\m\p\/\t\m\p\.\b\c\x\a\G\4\P\E\9\N ]] 00:50:07.644 15:00:31 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:50:07.644 15:00:31 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:50:07.644 15:00:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:50:07.644 15:00:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:07.644 15:00:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:07.904 15:00:31 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.C67qDR9hWc == \/\t\m\p\/\t\m\p\.\C\6\7\q\D\R\9\h\W\c ]] 00:50:07.904 15:00:31 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:50:07.904 15:00:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:50:07.904 15:00:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:50:07.904 15:00:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:07.904 15:00:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:07.904 
15:00:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:50:08.165 15:00:31 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:50:08.165 15:00:31 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:50:08.165 15:00:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:50:08.165 15:00:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:50:08.165 15:00:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:08.165 15:00:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:08.165 15:00:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:50:08.165 15:00:31 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:50:08.165 15:00:31 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:50:08.165 15:00:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:50:08.426 [2024-10-07 15:00:32.003818] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:50:08.426 nvme0n1 00:50:08.426 15:00:32 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:50:08.426 15:00:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:50:08.426 15:00:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:50:08.426 15:00:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:08.426 15:00:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
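The `get_key` and `get_refcnt` helpers traced above (`keyring/common.sh@10-12`) work by piping the `keyring_get_keys` JSON through `jq` filters. A standalone approximation of those filters, run against a canned response instead of a live `/var/tmp/bperf.sock` (the key names and paths are taken from this log; the exact helper bodies are an assumption):

```shell
# Hypothetical reproduction of the get_key / get_refcnt jq filters from
# keyring/common.sh, using a hard-coded keyring_get_keys response rather
# than querying a running bdevperf over the RPC socket.
keys='[{"name":"key0","path":"/tmp/tmp.bcxaG4PE9N","refcnt":1,"removed":false},
       {"name":"key1","path":"/tmp/tmp.C67qDR9hWc","refcnt":1,"removed":false}]'

# get_key key0 | jq -r .path
path=$(echo "$keys" | jq -r '.[] | select(.name == "key0") | .path')
# get_refcnt key1
refcnt=$(echo "$keys" | jq -r '.[] | select(.name == "key1") | .refcnt')

echo "$path $refcnt"
```

The test's `[[ /tmp/tmp.bcxaG4PE9N == \/\t\m\p\/... ]]` comparisons and `(( 1 == 1 ))` refcount checks are simply asserting on these two extracted fields.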
00:50:08.426 15:00:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:50:08.686 15:00:32 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:50:08.686 15:00:32 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:50:08.686 15:00:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:50:08.686 15:00:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:50:08.686 15:00:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:08.686 15:00:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:50:08.686 15:00:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:08.947 15:00:32 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:50:08.947 15:00:32 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:50:08.947 Running I/O for 1 seconds... 
00:50:09.888 12508.00 IOPS, 48.86 MiB/s 00:50:09.888 Latency(us) 00:50:09.888 [2024-10-07T13:00:33.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:09.888 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:50:09.888 nvme0n1 : 1.01 12551.82 49.03 0.00 0.00 10170.96 4532.91 16602.45 00:50:09.888 [2024-10-07T13:00:33.597Z] =================================================================================================================== 00:50:09.888 [2024-10-07T13:00:33.597Z] Total : 12551.82 49.03 0.00 0.00 10170.96 4532.91 16602.45 00:50:09.888 { 00:50:09.888 "results": [ 00:50:09.888 { 00:50:09.888 "job": "nvme0n1", 00:50:09.888 "core_mask": "0x2", 00:50:09.888 "workload": "randrw", 00:50:09.888 "percentage": 50, 00:50:09.888 "status": "finished", 00:50:09.888 "queue_depth": 128, 00:50:09.888 "io_size": 4096, 00:50:09.888 "runtime": 1.006786, 00:50:09.888 "iops": 12551.823326903632, 00:50:09.888 "mibps": 49.03055987071731, 00:50:09.888 "io_failed": 0, 00:50:09.888 "io_timeout": 0, 00:50:09.888 "avg_latency_us": 10170.958668460342, 00:50:09.888 "min_latency_us": 4532.906666666667, 00:50:09.888 "max_latency_us": 16602.453333333335 00:50:09.888 } 00:50:09.888 ], 00:50:09.888 "core_count": 1 00:50:09.888 } 00:50:09.888 15:00:33 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:50:09.888 15:00:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:50:10.148 15:00:33 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:50:10.148 15:00:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:50:10.148 15:00:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:50:10.148 15:00:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:10.148 15:00:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name 
== "key0")' 00:50:10.148 15:00:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:10.410 15:00:33 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:50:10.410 15:00:33 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:50:10.410 15:00:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:50:10.410 15:00:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:50:10.410 15:00:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:10.410 15:00:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:50:10.410 15:00:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:10.410 15:00:34 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:50:10.410 15:00:34 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:50:10.410 15:00:34 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:50:10.410 15:00:34 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:50:10.410 15:00:34 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:50:10.410 15:00:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:50:10.410 15:00:34 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:50:10.410 15:00:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:50:10.410 15:00:34 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 
-f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:50:10.410 15:00:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:50:10.671 [2024-10-07 15:00:34.272375] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:50:10.671 [2024-10-07 15:00:34.272841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (107): Transport endpoint is not connected 00:50:10.671 [2024-10-07 15:00:34.273823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (9): Bad file descriptor 00:50:10.671 [2024-10-07 15:00:34.274820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:50:10.671 [2024-10-07 15:00:34.274837] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:50:10.671 [2024-10-07 15:00:34.274851] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:50:10.671 [2024-10-07 15:00:34.274861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:50:10.671 request: 00:50:10.671 { 00:50:10.671 "name": "nvme0", 00:50:10.671 "trtype": "tcp", 00:50:10.671 "traddr": "127.0.0.1", 00:50:10.671 "adrfam": "ipv4", 00:50:10.671 "trsvcid": "4420", 00:50:10.671 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:50:10.671 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:50:10.671 "prchk_reftag": false, 00:50:10.671 "prchk_guard": false, 00:50:10.671 "hdgst": false, 00:50:10.671 "ddgst": false, 00:50:10.671 "psk": "key1", 00:50:10.671 "allow_unrecognized_csi": false, 00:50:10.671 "method": "bdev_nvme_attach_controller", 00:50:10.671 "req_id": 1 00:50:10.671 } 00:50:10.671 Got JSON-RPC error response 00:50:10.671 response: 00:50:10.671 { 00:50:10.671 "code": -5, 00:50:10.671 "message": "Input/output error" 00:50:10.671 } 00:50:10.671 15:00:34 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:50:10.671 15:00:34 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:50:10.671 15:00:34 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:50:10.671 15:00:34 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:50:10.671 15:00:34 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:50:10.671 15:00:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:50:10.671 15:00:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:50:10.671 15:00:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:10.671 15:00:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:50:10.671 15:00:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:10.932 15:00:34 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:50:10.932 15:00:34 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:50:10.932 15:00:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:50:10.932 15:00:34 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:50:10.932 15:00:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:10.932 15:00:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:10.932 15:00:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:50:11.193 15:00:34 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:50:11.193 15:00:34 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:50:11.193 15:00:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:50:11.193 15:00:34 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:50:11.193 15:00:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:50:11.453 15:00:34 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:50:11.453 15:00:34 keyring_file -- keyring/file.sh@78 -- # jq length 00:50:11.453 15:00:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:11.714 15:00:35 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:50:11.714 15:00:35 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.bcxaG4PE9N 00:50:11.714 15:00:35 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.bcxaG4PE9N 00:50:11.714 15:00:35 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:50:11.714 15:00:35 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.bcxaG4PE9N 00:50:11.714 15:00:35 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:50:11.714 15:00:35 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:50:11.714 15:00:35 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:50:11.714 15:00:35 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:50:11.714 15:00:35 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bcxaG4PE9N 00:50:11.714 15:00:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bcxaG4PE9N 00:50:11.714 [2024-10-07 15:00:35.325060] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.bcxaG4PE9N': 0100660 00:50:11.714 [2024-10-07 15:00:35.325089] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:50:11.714 request: 00:50:11.714 { 00:50:11.714 "name": "key0", 00:50:11.714 "path": "/tmp/tmp.bcxaG4PE9N", 00:50:11.714 "method": "keyring_file_add_key", 00:50:11.714 "req_id": 1 00:50:11.714 } 00:50:11.714 Got JSON-RPC error response 00:50:11.714 response: 00:50:11.714 { 00:50:11.714 "code": -1, 00:50:11.714 "message": "Operation not permitted" 00:50:11.714 } 00:50:11.714 15:00:35 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:50:11.714 15:00:35 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:50:11.714 15:00:35 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:50:11.714 15:00:35 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:50:11.714 15:00:35 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.bcxaG4PE9N 00:50:11.714 15:00:35 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bcxaG4PE9N 00:50:11.714 15:00:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bcxaG4PE9N 00:50:11.974 15:00:35 
keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.bcxaG4PE9N 00:50:11.974 15:00:35 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:50:11.974 15:00:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:50:11.974 15:00:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:50:11.974 15:00:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:11.974 15:00:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:11.974 15:00:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:50:12.235 15:00:35 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:50:12.235 15:00:35 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:50:12.235 15:00:35 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:50:12.235 15:00:35 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:50:12.235 15:00:35 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:50:12.235 15:00:35 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:50:12.235 15:00:35 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:50:12.235 15:00:35 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:50:12.235 15:00:35 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:50:12.235 15:00:35 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:50:12.235 [2024-10-07 15:00:35.850427] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.bcxaG4PE9N': No such file or directory 00:50:12.235 [2024-10-07 15:00:35.850455] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:50:12.235 [2024-10-07 15:00:35.850472] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:50:12.235 [2024-10-07 15:00:35.850484] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:50:12.235 [2024-10-07 15:00:35.850493] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:50:12.235 [2024-10-07 15:00:35.850501] bdev_nvme.c:6449:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:50:12.235 request: 00:50:12.235 { 00:50:12.235 "name": "nvme0", 00:50:12.235 "trtype": "tcp", 00:50:12.235 "traddr": "127.0.0.1", 00:50:12.235 "adrfam": "ipv4", 00:50:12.235 "trsvcid": "4420", 00:50:12.235 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:50:12.235 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:50:12.235 "prchk_reftag": false, 00:50:12.235 "prchk_guard": false, 00:50:12.235 "hdgst": false, 00:50:12.235 "ddgst": false, 00:50:12.235 "psk": "key0", 00:50:12.235 "allow_unrecognized_csi": false, 00:50:12.235 "method": "bdev_nvme_attach_controller", 00:50:12.235 "req_id": 1 00:50:12.235 } 00:50:12.235 Got JSON-RPC error response 00:50:12.235 response: 00:50:12.235 { 00:50:12.235 "code": -19, 00:50:12.235 "message": "No such device" 00:50:12.235 } 00:50:12.235 15:00:35 keyring_file -- common/autotest_common.sh@653 
-- # es=1 00:50:12.235 15:00:35 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:50:12.235 15:00:35 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:50:12.235 15:00:35 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:50:12.235 15:00:35 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:50:12.235 15:00:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:50:12.496 15:00:36 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:50:12.496 15:00:36 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:50:12.496 15:00:36 keyring_file -- keyring/common.sh@17 -- # name=key0 00:50:12.496 15:00:36 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:50:12.496 15:00:36 keyring_file -- keyring/common.sh@17 -- # digest=0 00:50:12.496 15:00:36 keyring_file -- keyring/common.sh@18 -- # mktemp 00:50:12.496 15:00:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.o4CFrwb8p6 00:50:12.496 15:00:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:50:12.496 15:00:36 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:50:12.496 15:00:36 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:50:12.496 15:00:36 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:50:12.496 15:00:36 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:50:12.496 15:00:36 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:50:12.496 15:00:36 keyring_file -- nvmf/common.sh@731 -- # python - 00:50:12.496 15:00:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.o4CFrwb8p6 00:50:12.496 15:00:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.o4CFrwb8p6 
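The `prep_key` trace above runs `format_interchange_psk 00112233445566778899aabbccddeeff 0`, which hands the raw hex key to an inline `python -` (`nvmf/common.sh@731`). A sketch of what that formatting step plausibly computes, per the NVMe-oF TLS PSK interchange format (`NVMeTLSkey-1:<hash-id>:<base64(key || CRC32-LE(key))>:`, hash-id `00` meaning no hash) — the exact body of the helper is an assumption, only the inputs come from this log:

```shell
# Hypothetical sketch of format_interchange_psk: wrap a raw hex PSK in the
# NVMe TLS PSK interchange format. The base64 payload is the key bytes
# followed by their CRC32 in little-endian order.
key=00112233445566778899aabbccddeeff
digest=0
psk=$(python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib

raw = bytes.fromhex(sys.argv[1])              # 16 raw key bytes
crc = struct.pack("<I", zlib.crc32(raw))      # CRC32, little-endian
print("NVMeTLSkey-1:%02x:%s:"
      % (int(sys.argv[2]), base64.b64encode(raw + crc).decode()))
EOF
)
echo "$psk"
```

The resulting string is what gets written to the `mktemp` path (`/tmp/tmp.o4CFrwb8p6` in this run) and registered with `keyring_file_add_key`.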
00:50:12.496 15:00:36 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.o4CFrwb8p6 00:50:12.496 15:00:36 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.o4CFrwb8p6 00:50:12.496 15:00:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.o4CFrwb8p6 00:50:12.756 15:00:36 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:50:12.756 15:00:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:50:13.016 nvme0n1 00:50:13.016 15:00:36 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:50:13.016 15:00:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:50:13.016 15:00:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:50:13.016 15:00:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:13.016 15:00:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:50:13.016 15:00:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:13.016 15:00:36 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:50:13.016 15:00:36 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:50:13.016 15:00:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:50:13.275 15:00:36 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:50:13.275 15:00:36 keyring_file -- 
keyring/file.sh@102 -- # jq -r .removed 00:50:13.275 15:00:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:13.275 15:00:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:13.275 15:00:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:50:13.534 15:00:36 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:50:13.534 15:00:36 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:50:13.534 15:00:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:50:13.534 15:00:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:50:13.534 15:00:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:13.534 15:00:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:13.534 15:00:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:50:13.534 15:00:37 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:50:13.534 15:00:37 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:50:13.534 15:00:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:50:13.795 15:00:37 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:50:13.795 15:00:37 keyring_file -- keyring/file.sh@105 -- # jq length 00:50:13.795 15:00:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:14.055 15:00:37 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:50:14.055 15:00:37 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.o4CFrwb8p6 00:50:14.055 15:00:37 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.o4CFrwb8p6 00:50:14.055 15:00:37 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.C67qDR9hWc 00:50:14.055 15:00:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.C67qDR9hWc 00:50:14.316 15:00:37 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:50:14.316 15:00:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:50:14.577 nvme0n1 00:50:14.577 15:00:38 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:50:14.577 15:00:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:50:14.841 15:00:38 keyring_file -- keyring/file.sh@113 -- # config='{ 00:50:14.841 "subsystems": [ 00:50:14.841 { 00:50:14.841 "subsystem": "keyring", 00:50:14.841 "config": [ 00:50:14.841 { 00:50:14.841 "method": "keyring_file_add_key", 00:50:14.841 "params": { 00:50:14.841 "name": "key0", 00:50:14.841 "path": "/tmp/tmp.o4CFrwb8p6" 00:50:14.841 } 00:50:14.841 }, 00:50:14.841 { 00:50:14.841 "method": "keyring_file_add_key", 00:50:14.841 "params": { 00:50:14.841 "name": "key1", 00:50:14.841 "path": "/tmp/tmp.C67qDR9hWc" 00:50:14.841 } 00:50:14.841 } 00:50:14.842 ] 00:50:14.842 }, 00:50:14.842 { 00:50:14.842 "subsystem": "iobuf", 00:50:14.842 "config": [ 00:50:14.842 { 00:50:14.842 "method": "iobuf_set_options", 
00:50:14.842 "params": { 00:50:14.842 "small_pool_count": 8192, 00:50:14.842 "large_pool_count": 1024, 00:50:14.842 "small_bufsize": 8192, 00:50:14.842 "large_bufsize": 135168 00:50:14.842 } 00:50:14.842 } 00:50:14.842 ] 00:50:14.842 }, 00:50:14.842 { 00:50:14.842 "subsystem": "sock", 00:50:14.842 "config": [ 00:50:14.842 { 00:50:14.842 "method": "sock_set_default_impl", 00:50:14.842 "params": { 00:50:14.842 "impl_name": "posix" 00:50:14.842 } 00:50:14.842 }, 00:50:14.842 { 00:50:14.842 "method": "sock_impl_set_options", 00:50:14.842 "params": { 00:50:14.842 "impl_name": "ssl", 00:50:14.842 "recv_buf_size": 4096, 00:50:14.842 "send_buf_size": 4096, 00:50:14.842 "enable_recv_pipe": true, 00:50:14.842 "enable_quickack": false, 00:50:14.842 "enable_placement_id": 0, 00:50:14.842 "enable_zerocopy_send_server": true, 00:50:14.842 "enable_zerocopy_send_client": false, 00:50:14.842 "zerocopy_threshold": 0, 00:50:14.842 "tls_version": 0, 00:50:14.842 "enable_ktls": false 00:50:14.842 } 00:50:14.842 }, 00:50:14.842 { 00:50:14.842 "method": "sock_impl_set_options", 00:50:14.842 "params": { 00:50:14.842 "impl_name": "posix", 00:50:14.842 "recv_buf_size": 2097152, 00:50:14.842 "send_buf_size": 2097152, 00:50:14.842 "enable_recv_pipe": true, 00:50:14.842 "enable_quickack": false, 00:50:14.842 "enable_placement_id": 0, 00:50:14.842 "enable_zerocopy_send_server": true, 00:50:14.842 "enable_zerocopy_send_client": false, 00:50:14.842 "zerocopy_threshold": 0, 00:50:14.842 "tls_version": 0, 00:50:14.842 "enable_ktls": false 00:50:14.842 } 00:50:14.842 } 00:50:14.842 ] 00:50:14.842 }, 00:50:14.842 { 00:50:14.842 "subsystem": "vmd", 00:50:14.842 "config": [] 00:50:14.842 }, 00:50:14.842 { 00:50:14.842 "subsystem": "accel", 00:50:14.842 "config": [ 00:50:14.842 { 00:50:14.842 "method": "accel_set_options", 00:50:14.842 "params": { 00:50:14.842 "small_cache_size": 128, 00:50:14.842 "large_cache_size": 16, 00:50:14.842 "task_count": 2048, 00:50:14.842 "sequence_count": 2048, 00:50:14.842 
"buf_count": 2048 00:50:14.842 } 00:50:14.842 } 00:50:14.842 ] 00:50:14.842 }, 00:50:14.842 { 00:50:14.842 "subsystem": "bdev", 00:50:14.842 "config": [ 00:50:14.842 { 00:50:14.842 "method": "bdev_set_options", 00:50:14.842 "params": { 00:50:14.842 "bdev_io_pool_size": 65535, 00:50:14.842 "bdev_io_cache_size": 256, 00:50:14.842 "bdev_auto_examine": true, 00:50:14.842 "iobuf_small_cache_size": 128, 00:50:14.842 "iobuf_large_cache_size": 16 00:50:14.842 } 00:50:14.842 }, 00:50:14.842 { 00:50:14.842 "method": "bdev_raid_set_options", 00:50:14.842 "params": { 00:50:14.842 "process_window_size_kb": 1024, 00:50:14.842 "process_max_bandwidth_mb_sec": 0 00:50:14.842 } 00:50:14.842 }, 00:50:14.842 { 00:50:14.842 "method": "bdev_iscsi_set_options", 00:50:14.842 "params": { 00:50:14.842 "timeout_sec": 30 00:50:14.842 } 00:50:14.842 }, 00:50:14.842 { 00:50:14.842 "method": "bdev_nvme_set_options", 00:50:14.842 "params": { 00:50:14.842 "action_on_timeout": "none", 00:50:14.842 "timeout_us": 0, 00:50:14.842 "timeout_admin_us": 0, 00:50:14.842 "keep_alive_timeout_ms": 10000, 00:50:14.842 "arbitration_burst": 0, 00:50:14.842 "low_priority_weight": 0, 00:50:14.842 "medium_priority_weight": 0, 00:50:14.842 "high_priority_weight": 0, 00:50:14.842 "nvme_adminq_poll_period_us": 10000, 00:50:14.842 "nvme_ioq_poll_period_us": 0, 00:50:14.842 "io_queue_requests": 512, 00:50:14.842 "delay_cmd_submit": true, 00:50:14.842 "transport_retry_count": 4, 00:50:14.842 "bdev_retry_count": 3, 00:50:14.842 "transport_ack_timeout": 0, 00:50:14.842 "ctrlr_loss_timeout_sec": 0, 00:50:14.843 "reconnect_delay_sec": 0, 00:50:14.843 "fast_io_fail_timeout_sec": 0, 00:50:14.843 "disable_auto_failback": false, 00:50:14.843 "generate_uuids": false, 00:50:14.843 "transport_tos": 0, 00:50:14.843 "nvme_error_stat": false, 00:50:14.843 "rdma_srq_size": 0, 00:50:14.843 "io_path_stat": false, 00:50:14.843 "allow_accel_sequence": false, 00:50:14.843 "rdma_max_cq_size": 0, 00:50:14.843 "rdma_cm_event_timeout_ms": 0, 
00:50:14.843 "dhchap_digests": [ 00:50:14.843 "sha256", 00:50:14.843 "sha384", 00:50:14.843 "sha512" 00:50:14.843 ], 00:50:14.843 "dhchap_dhgroups": [ 00:50:14.843 "null", 00:50:14.843 "ffdhe2048", 00:50:14.843 "ffdhe3072", 00:50:14.843 "ffdhe4096", 00:50:14.843 "ffdhe6144", 00:50:14.843 "ffdhe8192" 00:50:14.843 ] 00:50:14.843 } 00:50:14.843 }, 00:50:14.843 { 00:50:14.843 "method": "bdev_nvme_attach_controller", 00:50:14.843 "params": { 00:50:14.843 "name": "nvme0", 00:50:14.843 "trtype": "TCP", 00:50:14.843 "adrfam": "IPv4", 00:50:14.843 "traddr": "127.0.0.1", 00:50:14.843 "trsvcid": "4420", 00:50:14.843 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:50:14.843 "prchk_reftag": false, 00:50:14.843 "prchk_guard": false, 00:50:14.843 "ctrlr_loss_timeout_sec": 0, 00:50:14.843 "reconnect_delay_sec": 0, 00:50:14.843 "fast_io_fail_timeout_sec": 0, 00:50:14.843 "psk": "key0", 00:50:14.843 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:50:14.843 "hdgst": false, 00:50:14.843 "ddgst": false 00:50:14.843 } 00:50:14.843 }, 00:50:14.843 { 00:50:14.843 "method": "bdev_nvme_set_hotplug", 00:50:14.843 "params": { 00:50:14.843 "period_us": 100000, 00:50:14.843 "enable": false 00:50:14.843 } 00:50:14.843 }, 00:50:14.843 { 00:50:14.843 "method": "bdev_wait_for_examine" 00:50:14.843 } 00:50:14.843 ] 00:50:14.843 }, 00:50:14.843 { 00:50:14.843 "subsystem": "nbd", 00:50:14.843 "config": [] 00:50:14.843 } 00:50:14.843 ] 00:50:14.843 }' 00:50:14.843 15:00:38 keyring_file -- keyring/file.sh@115 -- # killprocess 3456263 00:50:14.843 15:00:38 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3456263 ']' 00:50:14.843 15:00:38 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3456263 00:50:14.843 15:00:38 keyring_file -- common/autotest_common.sh@955 -- # uname 00:50:14.843 15:00:38 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:50:14.843 15:00:38 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3456263 00:50:14.843 15:00:38 
keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:50:14.843 15:00:38 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:50:14.843 15:00:38 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3456263' 00:50:14.843 killing process with pid 3456263 00:50:14.843 15:00:38 keyring_file -- common/autotest_common.sh@969 -- # kill 3456263 00:50:14.843 Received shutdown signal, test time was about 1.000000 seconds 00:50:14.843 00:50:14.843 Latency(us) 00:50:14.843 [2024-10-07T13:00:38.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:14.843 [2024-10-07T13:00:38.552Z] =================================================================================================================== 00:50:14.843 [2024-10-07T13:00:38.552Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:50:14.843 15:00:38 keyring_file -- common/autotest_common.sh@974 -- # wait 3456263 00:50:15.413 15:00:38 keyring_file -- keyring/file.sh@118 -- # bperfpid=3458079 00:50:15.413 15:00:38 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3458079 /var/tmp/bperf.sock 00:50:15.413 15:00:38 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3458079 ']' 00:50:15.413 15:00:38 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:50:15.413 15:00:38 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:50:15.413 15:00:38 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:50:15.413 15:00:38 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:50:15.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
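The bdevperf run above takes its configuration from `/dev/fd/63`: keyring/file.sh echoes a full SPDK subsystem config document into a process substitution, and that is the JSON blob that follows in the trace. The shape is a top-level `subsystems` array, each entry naming a `subsystem` plus a `config` list of `{method, params}` RPC calls. A minimal sketch building just the keyring portion of that same shape (paths taken from the log; this is illustrative Python, not SPDK code):

```python
import json

# Same structure as the config echoed into bdevperf below:
# subsystems -> [ { subsystem, config: [ { method, params } ... ] } ... ]
config = {
    "subsystems": [
        {
            "subsystem": "keyring",
            "config": [
                {"method": "keyring_file_add_key",
                 "params": {"name": "key0", "path": "/tmp/tmp.o4CFrwb8p6"}},
                {"method": "keyring_file_add_key",
                 "params": {"name": "key1", "path": "/tmp/tmp.C67qDR9hWc"}},
            ],
        }
    ]
}

# Serialized, this is the kind of document fed via -c /dev/fd/63.
print(json.dumps(config, indent=2))
```

Each later subsystem in the dump (`sock`, `accel`, `bdev`, ...) follows the identical method/params pattern, which is why the whole thing can be generated inline by the shell script.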
00:50:15.413 15:00:38 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:50:15.413 15:00:38 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:50:15.413 "subsystems": [ 00:50:15.413 { 00:50:15.413 "subsystem": "keyring", 00:50:15.413 "config": [ 00:50:15.413 { 00:50:15.413 "method": "keyring_file_add_key", 00:50:15.413 "params": { 00:50:15.413 "name": "key0", 00:50:15.413 "path": "/tmp/tmp.o4CFrwb8p6" 00:50:15.413 } 00:50:15.413 }, 00:50:15.413 { 00:50:15.413 "method": "keyring_file_add_key", 00:50:15.413 "params": { 00:50:15.413 "name": "key1", 00:50:15.413 "path": "/tmp/tmp.C67qDR9hWc" 00:50:15.413 } 00:50:15.413 } 00:50:15.413 ] 00:50:15.413 }, 00:50:15.413 { 00:50:15.413 "subsystem": "iobuf", 00:50:15.413 "config": [ 00:50:15.413 { 00:50:15.413 "method": "iobuf_set_options", 00:50:15.413 "params": { 00:50:15.413 "small_pool_count": 8192, 00:50:15.413 "large_pool_count": 1024, 00:50:15.413 "small_bufsize": 8192, 00:50:15.413 "large_bufsize": 135168 00:50:15.413 } 00:50:15.413 } 00:50:15.413 ] 00:50:15.413 }, 00:50:15.413 { 00:50:15.413 "subsystem": "sock", 00:50:15.413 "config": [ 00:50:15.413 { 00:50:15.413 "method": "sock_set_default_impl", 00:50:15.413 "params": { 00:50:15.413 "impl_name": "posix" 00:50:15.413 } 00:50:15.413 }, 00:50:15.413 { 00:50:15.413 "method": "sock_impl_set_options", 00:50:15.413 "params": { 00:50:15.413 "impl_name": "ssl", 00:50:15.413 "recv_buf_size": 4096, 00:50:15.413 "send_buf_size": 4096, 00:50:15.413 "enable_recv_pipe": true, 00:50:15.413 "enable_quickack": false, 00:50:15.413 "enable_placement_id": 0, 00:50:15.413 "enable_zerocopy_send_server": true, 00:50:15.413 "enable_zerocopy_send_client": false, 00:50:15.413 "zerocopy_threshold": 0, 00:50:15.413 "tls_version": 0, 00:50:15.413 "enable_ktls": false 00:50:15.413 } 00:50:15.413 }, 00:50:15.413 { 00:50:15.413 "method": "sock_impl_set_options", 00:50:15.413 "params": { 00:50:15.413 "impl_name": "posix", 00:50:15.413 "recv_buf_size": 2097152, 00:50:15.413 
"send_buf_size": 2097152, 00:50:15.413 "enable_recv_pipe": true, 00:50:15.413 "enable_quickack": false, 00:50:15.413 "enable_placement_id": 0, 00:50:15.413 "enable_zerocopy_send_server": true, 00:50:15.413 "enable_zerocopy_send_client": false, 00:50:15.413 "zerocopy_threshold": 0, 00:50:15.413 "tls_version": 0, 00:50:15.413 "enable_ktls": false 00:50:15.413 } 00:50:15.413 } 00:50:15.413 ] 00:50:15.413 }, 00:50:15.413 { 00:50:15.413 "subsystem": "vmd", 00:50:15.413 "config": [] 00:50:15.413 }, 00:50:15.413 { 00:50:15.413 "subsystem": "accel", 00:50:15.413 "config": [ 00:50:15.413 { 00:50:15.413 "method": "accel_set_options", 00:50:15.413 "params": { 00:50:15.413 "small_cache_size": 128, 00:50:15.413 "large_cache_size": 16, 00:50:15.413 "task_count": 2048, 00:50:15.413 "sequence_count": 2048, 00:50:15.413 "buf_count": 2048 00:50:15.413 } 00:50:15.413 } 00:50:15.413 ] 00:50:15.413 }, 00:50:15.413 { 00:50:15.413 "subsystem": "bdev", 00:50:15.413 "config": [ 00:50:15.413 { 00:50:15.413 "method": "bdev_set_options", 00:50:15.413 "params": { 00:50:15.413 "bdev_io_pool_size": 65535, 00:50:15.413 "bdev_io_cache_size": 256, 00:50:15.413 "bdev_auto_examine": true, 00:50:15.413 "iobuf_small_cache_size": 128, 00:50:15.413 "iobuf_large_cache_size": 16 00:50:15.413 } 00:50:15.413 }, 00:50:15.413 { 00:50:15.413 "method": "bdev_raid_set_options", 00:50:15.413 "params": { 00:50:15.413 "process_window_size_kb": 1024, 00:50:15.413 "process_max_bandwidth_mb_sec": 0 00:50:15.413 } 00:50:15.413 }, 00:50:15.413 { 00:50:15.413 "method": "bdev_iscsi_set_options", 00:50:15.413 "params": { 00:50:15.413 "timeout_sec": 30 00:50:15.413 } 00:50:15.413 }, 00:50:15.413 { 00:50:15.413 "method": "bdev_nvme_set_options", 00:50:15.413 "params": { 00:50:15.413 "action_on_timeout": "none", 00:50:15.413 "timeout_us": 0, 00:50:15.413 "timeout_admin_us": 0, 00:50:15.413 "keep_alive_timeout_ms": 10000, 00:50:15.413 "arbitration_burst": 0, 00:50:15.413 "low_priority_weight": 0, 00:50:15.413 
"medium_priority_weight": 0, 00:50:15.413 "high_priority_weight": 0, 00:50:15.413 "nvme_adminq_poll_period_us": 10000, 00:50:15.413 "nvme_ioq_poll_period_us": 0, 00:50:15.413 "io_queue_requests": 512, 00:50:15.413 "delay_cmd_submit": true, 00:50:15.413 "transport_retry_count": 4, 00:50:15.413 "bdev_retry_count": 3, 00:50:15.413 "transport_ack_timeout": 0, 00:50:15.413 "ctrlr_loss_timeout_sec": 0, 00:50:15.413 "reconnect_delay_sec": 0, 00:50:15.414 "fast_io_fail_timeout_sec": 0, 00:50:15.414 "disable_auto_failback": false, 00:50:15.414 "generate_uuids": false, 00:50:15.414 "transport_tos": 0, 00:50:15.414 "nvme_error_stat": false, 00:50:15.414 "rdma_srq_size": 0, 00:50:15.414 "io_path_stat": false, 00:50:15.414 "allow_accel_sequence": false, 00:50:15.414 "rdma_max_cq_size": 0, 00:50:15.414 "rdma_cm_event_timeout_ms": 0, 00:50:15.414 "dhchap_digests": [ 00:50:15.414 "sha256", 00:50:15.414 "sha384", 00:50:15.414 "sha512" 00:50:15.414 ], 00:50:15.414 "dhchap_dhgroups": [ 00:50:15.414 "null", 00:50:15.414 "ffdhe2048", 00:50:15.414 "ffdhe3072", 00:50:15.414 "ffdhe4096", 00:50:15.414 "ffdhe6144", 00:50:15.414 "ffdhe8192" 00:50:15.414 ] 00:50:15.414 } 00:50:15.414 }, 00:50:15.414 { 00:50:15.414 "method": "bdev_nvme_attach_controller", 00:50:15.414 "params": { 00:50:15.414 "name": "nvme0", 00:50:15.414 "trtype": "TCP", 00:50:15.414 "adrfam": "IPv4", 00:50:15.414 "traddr": "127.0.0.1", 00:50:15.414 "trsvcid": "4420", 00:50:15.414 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:50:15.414 "prchk_reftag": false, 00:50:15.414 "prchk_guard": false, 00:50:15.414 "ctrlr_loss_timeout_sec": 0, 00:50:15.414 "reconnect_delay_sec": 0, 00:50:15.414 "fast_io_fail_timeout_sec": 0, 00:50:15.414 "psk": "key0", 00:50:15.414 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:50:15.414 "hdgst": false, 00:50:15.414 "ddgst": false 00:50:15.414 } 00:50:15.414 }, 00:50:15.414 { 00:50:15.414 "method": "bdev_nvme_set_hotplug", 00:50:15.414 "params": { 00:50:15.414 "period_us": 100000, 00:50:15.414 "enable": false 
00:50:15.414 } 00:50:15.414 }, 00:50:15.414 { 00:50:15.414 "method": "bdev_wait_for_examine" 00:50:15.414 } 00:50:15.414 ] 00:50:15.414 }, 00:50:15.414 { 00:50:15.414 "subsystem": "nbd", 00:50:15.414 "config": [] 00:50:15.414 } 00:50:15.414 ] 00:50:15.414 }' 00:50:15.414 15:00:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:50:15.414 [2024-10-07 15:00:39.003965] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:50:15.414 [2024-10-07 15:00:39.004080] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3458079 ] 00:50:15.673 [2024-10-07 15:00:39.126838] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:15.673 [2024-10-07 15:00:39.263107] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:50:15.933 [2024-10-07 15:00:39.529791] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:50:16.194 15:00:39 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:50:16.194 15:00:39 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:50:16.194 15:00:39 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:50:16.194 15:00:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:16.194 15:00:39 keyring_file -- keyring/file.sh@121 -- # jq length 00:50:16.453 15:00:39 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:50:16.453 15:00:39 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:50:16.453 15:00:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:50:16.453 15:00:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:50:16.453 15:00:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:50:16.453 15:00:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:50:16.453 15:00:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:16.453 15:00:40 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:50:16.453 15:00:40 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:50:16.453 15:00:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:50:16.453 15:00:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:50:16.453 15:00:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:16.453 15:00:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:50:16.453 15:00:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:16.713 15:00:40 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:50:16.713 15:00:40 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:50:16.713 15:00:40 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:50:16.713 15:00:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:50:16.973 15:00:40 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:50:16.973 15:00:40 keyring_file -- keyring/file.sh@1 -- # cleanup 00:50:16.973 15:00:40 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.o4CFrwb8p6 /tmp/tmp.C67qDR9hWc 00:50:16.973 15:00:40 keyring_file -- keyring/file.sh@20 -- # killprocess 3458079 00:50:16.973 15:00:40 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3458079 ']' 00:50:16.973 15:00:40 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3458079 00:50:16.973 15:00:40 keyring_file -- common/autotest_common.sh@955 -- # uname 00:50:16.973 
15:00:40 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:50:16.973 15:00:40 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3458079 00:50:16.973 15:00:40 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:50:16.973 15:00:40 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:50:16.973 15:00:40 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3458079' 00:50:16.973 killing process with pid 3458079 00:50:16.973 15:00:40 keyring_file -- common/autotest_common.sh@969 -- # kill 3458079 00:50:16.973 Received shutdown signal, test time was about 1.000000 seconds 00:50:16.973 00:50:16.973 Latency(us) 00:50:16.973 [2024-10-07T13:00:40.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:16.973 [2024-10-07T13:00:40.682Z] =================================================================================================================== 00:50:16.973 [2024-10-07T13:00:40.682Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:50:16.973 15:00:40 keyring_file -- common/autotest_common.sh@974 -- # wait 3458079 00:50:17.543 15:00:41 keyring_file -- keyring/file.sh@21 -- # killprocess 3456105 00:50:17.543 15:00:41 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3456105 ']' 00:50:17.543 15:00:41 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3456105 00:50:17.543 15:00:41 keyring_file -- common/autotest_common.sh@955 -- # uname 00:50:17.543 15:00:41 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:50:17.543 15:00:41 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3456105 00:50:17.543 15:00:41 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:50:17.543 15:00:41 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:50:17.543 15:00:41 keyring_file -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3456105' 00:50:17.543 killing process with pid 3456105 00:50:17.543 15:00:41 keyring_file -- common/autotest_common.sh@969 -- # kill 3456105 00:50:17.543 15:00:41 keyring_file -- common/autotest_common.sh@974 -- # wait 3456105 00:50:19.453 00:50:19.453 real 0m14.342s 00:50:19.453 user 0m31.371s 00:50:19.453 sys 0m2.917s 00:50:19.453 15:00:42 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:50:19.453 15:00:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:50:19.453 ************************************ 00:50:19.453 END TEST keyring_file 00:50:19.453 ************************************ 00:50:19.453 15:00:42 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:50:19.453 15:00:42 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:50:19.453 15:00:42 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:50:19.453 15:00:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:50:19.453 15:00:42 -- common/autotest_common.sh@10 -- # set +x 00:50:19.453 ************************************ 00:50:19.453 START TEST keyring_linux 00:50:19.453 ************************************ 00:50:19.453 15:00:42 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:50:19.453 Joined session keyring: 1065220984 00:50:19.453 * Looking for test storage... 
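The `killprocess` helper seen throughout this section first probes the target with `kill -0` (signal 0 delivers nothing but still checks that the pid exists) and inspects the command name via `ps -o comm=` before sending SIGKILL. The liveness probe translates to Python as follows (hypothetical helper name, mirroring the shell logic rather than reusing it):

```python
import os

def pid_alive(pid: int) -> bool:
    """Existence probe: signal 0 delivers nothing, but still checks the pid."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:   # no such process
        return False
    except PermissionError:      # process exists but belongs to another user
        return True
    return True
```

The shell version additionally refuses to kill anything whose command name resolves to `sudo`, which is why the trace compares `process_name` (`reactor_0`/`reactor_1` here) against `sudo` before issuing the kill.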
00:50:19.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:50:19.453 15:00:43 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:50:19.453 15:00:43 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:50:19.453 15:00:43 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:50:19.453 15:00:43 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@345 -- # : 1 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:50:19.453 15:00:43 keyring_linux -- scripts/common.sh@368 -- # return 0 00:50:19.453 15:00:43 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:50:19.453 15:00:43 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:50:19.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:19.453 --rc genhtml_branch_coverage=1 00:50:19.453 --rc genhtml_function_coverage=1 00:50:19.453 --rc genhtml_legend=1 00:50:19.453 --rc geninfo_all_blocks=1 00:50:19.453 --rc geninfo_unexecuted_blocks=1 00:50:19.453 00:50:19.453 ' 00:50:19.453 15:00:43 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:50:19.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:19.453 --rc genhtml_branch_coverage=1 00:50:19.453 --rc genhtml_function_coverage=1 00:50:19.453 --rc genhtml_legend=1 00:50:19.453 --rc geninfo_all_blocks=1 00:50:19.453 --rc geninfo_unexecuted_blocks=1 00:50:19.453 00:50:19.453 ' 
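The `lt 1.15 2` call traced above is scripts/common.sh comparing the installed lcov version against a minimum, splitting on `.`/`-` and comparing field by field with missing fields treated as 0. The same comparison in Python (a hypothetical helper, not the SPDK function):

```python
def version_lt(a: str, b: str) -> bool:
    """True when dotted version a sorts before b, comparing field by field."""
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    width = max(len(pa), len(pb))
    pa += [0] * (width - len(pa))   # pad so "2" compares like "2.0",
    pb += [0] * (width - len(pb))   # matching the shell's per-field loop
    return pa < pb                  # list comparison is already field-wise
```

This is why the trace walks `ver1`/`ver2` index by index and returns as soon as one field differs: `1.15 < 2` holds because the first fields already decide it.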
00:50:19.453 15:00:43 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:50:19.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:19.453 --rc genhtml_branch_coverage=1 00:50:19.453 --rc genhtml_function_coverage=1 00:50:19.453 --rc genhtml_legend=1 00:50:19.453 --rc geninfo_all_blocks=1 00:50:19.453 --rc geninfo_unexecuted_blocks=1 00:50:19.453 00:50:19.453 ' 00:50:19.453 15:00:43 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:50:19.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:50:19.453 --rc genhtml_branch_coverage=1 00:50:19.453 --rc genhtml_function_coverage=1 00:50:19.453 --rc genhtml_legend=1 00:50:19.453 --rc geninfo_all_blocks=1 00:50:19.453 --rc geninfo_unexecuted_blocks=1 00:50:19.453 00:50:19.453 ' 00:50:19.453 15:00:43 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:50:19.453 15:00:43 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:50:19.453 15:00:43 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:50:19.453 15:00:43 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:50:19.453 15:00:43 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:50:19.453 15:00:43 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:50:19.453 15:00:43 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:50:19.453 15:00:43 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:50:19.453 15:00:43 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:50:19.454 15:00:43 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:50:19.454 15:00:43 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:50:19.454 15:00:43 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:19.454 15:00:43 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:19.454 15:00:43 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:19.454 15:00:43 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:19.454 15:00:43 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:19.454 15:00:43 keyring_linux -- paths/export.sh@5 -- # export PATH 00:50:19.454 15:00:43 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:50:19.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:50:19.454 15:00:43 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:50:19.454 15:00:43 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:50:19.454 15:00:43 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:50:19.454 15:00:43 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:50:19.454 15:00:43 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:50:19.454 15:00:43 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:50:19.454 15:00:43 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:50:19.454 15:00:43 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:50:19.454 15:00:43 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:50:19.454 15:00:43 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:50:19.454 15:00:43 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:50:19.454 15:00:43 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:50:19.454 15:00:43 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@730 -- # 
key=00112233445566778899aabbccddeeff 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:50:19.454 15:00:43 keyring_linux -- nvmf/common.sh@731 -- # python - 00:50:19.716 15:00:43 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:50:19.716 15:00:43 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:50:19.716 /tmp/:spdk-test:key0 00:50:19.716 15:00:43 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:50:19.716 15:00:43 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:50:19.716 15:00:43 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:50:19.716 15:00:43 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:50:19.716 15:00:43 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:50:19.716 15:00:43 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:50:19.716 15:00:43 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:50:19.716 15:00:43 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:50:19.716 15:00:43 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:50:19.716 15:00:43 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:50:19.716 15:00:43 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:50:19.716 15:00:43 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:50:19.716 15:00:43 keyring_linux -- nvmf/common.sh@731 -- # python - 00:50:19.716 15:00:43 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:50:19.716 15:00:43 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:50:19.716 /tmp/:spdk-test:key1 00:50:19.716 15:00:43 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3458917 00:50:19.716 15:00:43 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 3458917 00:50:19.716 15:00:43 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:50:19.716 15:00:43 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3458917 ']' 00:50:19.716 15:00:43 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:19.716 15:00:43 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:50:19.716 15:00:43 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:19.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:19.716 15:00:43 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:50:19.716 15:00:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:50:19.716 [2024-10-07 15:00:43.330788] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
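The `prep_key` calls above run `format_interchange_psk`, an embedded `python -` step in nvmf/common.sh that turns the raw hex string into an NVMe TLS PSK interchange string: per NVMe TP 8006 the format is `NVMeTLSkey-1:<digest>:<base64 of the configured key bytes followed by their CRC-32, little endian>:`. A sketch of that transform (my reimplementation of the documented format, not the SPDK helper itself):

```python
import base64
import struct
import zlib

def format_interchange_psk(key: str, digest: int = 0) -> str:
    """base64(configured key bytes || CRC-32 of those bytes, little endian)."""
    raw = key.encode("ascii")
    blob = raw + struct.pack("<I", zlib.crc32(raw) & 0xFFFFFFFF)
    return f"NVMeTLSkey-1:{digest:02d}:{base64.b64encode(blob).decode()}:"

# key0 from the trace; digest 0 means no hash function selected.
psk = format_interchange_psk("00112233445566778899aabbccddeeff")
```

For key0 this should reproduce the `NVMeTLSkey-1:00:MDAx...JEiQ:` string that the test later loads into the session keyring with `keyctl add user :spdk-test:key0 ... @s`.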
00:50:19.716 [2024-10-07 15:00:43.330933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3458917 ] 00:50:19.977 [2024-10-07 15:00:43.460990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:19.977 [2024-10-07 15:00:43.643844] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:50:20.917 15:00:44 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:50:20.917 15:00:44 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:50:20.917 15:00:44 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:50:20.917 15:00:44 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:50:20.917 15:00:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:50:20.917 [2024-10-07 15:00:44.283793] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:50:20.917 null0 00:50:20.917 [2024-10-07 15:00:44.315853] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:50:20.917 [2024-10-07 15:00:44.316314] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:50:20.917 15:00:44 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:50:20.917 15:00:44 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:50:20.917 540182422 00:50:20.917 15:00:44 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:50:20.917 710549571 00:50:20.917 15:00:44 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3459191 00:50:20.917 15:00:44 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3459191 /var/tmp/bperf.sock 00:50:20.917 15:00:44 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:50:20.917 15:00:44 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3459191 ']' 00:50:20.917 15:00:44 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:50:20.917 15:00:44 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:50:20.917 15:00:44 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:50:20.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:50:20.917 15:00:44 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:50:20.917 15:00:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:50:20.917 [2024-10-07 15:00:44.419662] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:50:20.917 [2024-10-07 15:00:44.419772] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3459191 ] 00:50:20.917 [2024-10-07 15:00:44.544044] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:21.178 [2024-10-07 15:00:44.681587] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:50:21.749 15:00:45 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:50:21.749 15:00:45 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:50:21.749 15:00:45 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:50:21.749 15:00:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:50:21.749 15:00:45 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:50:21.749 15:00:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:50:22.010 15:00:45 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:50:22.010 15:00:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:50:22.270 [2024-10-07 15:00:45.836960] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:50:22.270 nvme0n1 00:50:22.270 15:00:45 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:50:22.270 15:00:45 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:50:22.270 15:00:45 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:50:22.270 15:00:45 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:50:22.270 15:00:45 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:50:22.270 15:00:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:22.531 15:00:46 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:50:22.531 15:00:46 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:50:22.531 15:00:46 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:50:22.531 15:00:46 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:50:22.531 15:00:46 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:50:22.531 15:00:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:22.531 15:00:46 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:50:22.791 15:00:46 keyring_linux -- keyring/linux.sh@25 -- # sn=540182422 00:50:22.791 15:00:46 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:50:22.791 15:00:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:50:22.791 15:00:46 keyring_linux -- keyring/linux.sh@26 -- # [[ 540182422 == \5\4\0\1\8\2\4\2\2 ]] 00:50:22.791 15:00:46 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 540182422 00:50:22.791 15:00:46 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:50:22.791 15:00:46 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:50:22.791 Running I/O for 1 seconds... 00:50:23.992 4756.00 IOPS, 18.58 MiB/s 00:50:23.992 Latency(us) 00:50:23.992 [2024-10-07T13:00:47.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:23.992 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:50:23.992 nvme0n1 : 1.08 4526.02 17.68 0.00 0.00 28176.29 4860.59 88255.15 00:50:23.992 [2024-10-07T13:00:47.701Z] =================================================================================================================== 00:50:23.992 [2024-10-07T13:00:47.701Z] Total : 4526.02 17.68 0.00 0.00 28176.29 4860.59 88255.15 00:50:23.992 { 00:50:23.992 "results": [ 00:50:23.992 { 00:50:23.992 "job": "nvme0n1", 00:50:23.992 "core_mask": "0x2", 00:50:23.992 "workload": "randread", 00:50:23.992 "status": "finished", 00:50:23.992 "queue_depth": 128, 00:50:23.992 "io_size": 4096, 00:50:23.992 "runtime": 1.079093, 00:50:23.992 "iops": 4526.023243594389, 00:50:23.992 "mibps": 17.679778295290582, 00:50:23.992 "io_failed": 0, 00:50:23.993 "io_timeout": 0, 00:50:23.993 "avg_latency_us": 28176.292306852312, 00:50:23.993 "min_latency_us": 4860.586666666667, 00:50:23.993 "max_latency_us": 88255.14666666667 00:50:23.993 } 00:50:23.993 ], 00:50:23.993 "core_count": 1 00:50:23.993 } 00:50:23.993 15:00:47 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:50:23.993 15:00:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:50:23.993 15:00:47 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:50:23.993 15:00:47 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:50:23.993 15:00:47 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:50:23.993 15:00:47 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:50:23.993 15:00:47 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:50:23.993 15:00:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:50:24.253 15:00:47 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:50:24.253 15:00:47 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:50:24.253 15:00:47 keyring_linux -- keyring/linux.sh@23 -- # return 00:50:24.253 15:00:47 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:50:24.253 15:00:47 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:50:24.253 15:00:47 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:50:24.254 15:00:47 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:50:24.254 15:00:47 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:50:24.254 15:00:47 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:50:24.254 15:00:47 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:50:24.254 15:00:47 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:50:24.254 15:00:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:50:24.515 [2024-10-07 15:00:47.997669] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:50:24.515 [2024-10-07 15:00:47.997884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (107): Transport endpoint is not connected 00:50:24.515 [2024-10-07 15:00:47.998869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a3700 (9): Bad file descriptor 00:50:24.515 [2024-10-07 15:00:47.999867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:50:24.515 [2024-10-07 15:00:47.999881] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:50:24.515 [2024-10-07 15:00:47.999891] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:50:24.515 [2024-10-07 15:00:47.999901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:50:24.515 request: 00:50:24.515 { 00:50:24.515 "name": "nvme0", 00:50:24.515 "trtype": "tcp", 00:50:24.515 "traddr": "127.0.0.1", 00:50:24.515 "adrfam": "ipv4", 00:50:24.515 "trsvcid": "4420", 00:50:24.515 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:50:24.515 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:50:24.515 "prchk_reftag": false, 00:50:24.515 "prchk_guard": false, 00:50:24.515 "hdgst": false, 00:50:24.515 "ddgst": false, 00:50:24.515 "psk": ":spdk-test:key1", 00:50:24.515 "allow_unrecognized_csi": false, 00:50:24.515 "method": "bdev_nvme_attach_controller", 00:50:24.515 "req_id": 1 00:50:24.515 } 00:50:24.515 Got JSON-RPC error response 00:50:24.515 response: 00:50:24.515 { 00:50:24.515 "code": -5, 00:50:24.515 "message": "Input/output error" 00:50:24.515 } 00:50:24.515 15:00:48 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:50:24.515 15:00:48 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:50:24.515 15:00:48 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:50:24.515 15:00:48 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:50:24.515 15:00:48 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:50:24.515 15:00:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:50:24.515 15:00:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:50:24.515 15:00:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:50:24.515 15:00:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:50:24.515 15:00:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:50:24.515 15:00:48 keyring_linux -- keyring/linux.sh@33 -- # sn=540182422 00:50:24.515 15:00:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 540182422 00:50:24.515 1 links removed 00:50:24.515 15:00:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:50:24.515 15:00:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:50:24.515 
15:00:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:50:24.515 15:00:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:50:24.515 15:00:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:50:24.515 15:00:48 keyring_linux -- keyring/linux.sh@33 -- # sn=710549571 00:50:24.515 15:00:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 710549571 00:50:24.515 1 links removed 00:50:24.515 15:00:48 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3459191 00:50:24.515 15:00:48 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3459191 ']' 00:50:24.515 15:00:48 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3459191 00:50:24.515 15:00:48 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:50:24.515 15:00:48 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:50:24.515 15:00:48 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3459191 00:50:24.515 15:00:48 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:50:24.515 15:00:48 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:50:24.515 15:00:48 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3459191' 00:50:24.515 killing process with pid 3459191 00:50:24.515 15:00:48 keyring_linux -- common/autotest_common.sh@969 -- # kill 3459191 00:50:24.515 Received shutdown signal, test time was about 1.000000 seconds 00:50:24.515 00:50:24.515 Latency(us) 00:50:24.515 [2024-10-07T13:00:48.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:50:24.515 [2024-10-07T13:00:48.224Z] =================================================================================================================== 00:50:24.515 [2024-10-07T13:00:48.224Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:50:24.515 15:00:48 keyring_linux -- common/autotest_common.sh@974 -- # wait 3459191 
00:50:25.086 15:00:48 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3458917 00:50:25.086 15:00:48 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3458917 ']' 00:50:25.086 15:00:48 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3458917 00:50:25.086 15:00:48 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:50:25.086 15:00:48 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:50:25.086 15:00:48 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3458917 00:50:25.086 15:00:48 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:50:25.086 15:00:48 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:50:25.086 15:00:48 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3458917' 00:50:25.086 killing process with pid 3458917 00:50:25.086 15:00:48 keyring_linux -- common/autotest_common.sh@969 -- # kill 3458917 00:50:25.086 15:00:48 keyring_linux -- common/autotest_common.sh@974 -- # wait 3458917 00:50:27.000 00:50:27.000 real 0m7.498s 00:50:27.000 user 0m12.707s 00:50:27.000 sys 0m1.356s 00:50:27.000 15:00:50 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:50:27.000 15:00:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:50:27.000 ************************************ 00:50:27.000 END TEST keyring_linux 00:50:27.000 ************************************ 00:50:27.000 15:00:50 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:50:27.000 15:00:50 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:50:27.000 15:00:50 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:50:27.000 15:00:50 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:50:27.000 15:00:50 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:50:27.000 15:00:50 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:50:27.000 15:00:50 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:50:27.000 15:00:50 -- spdk/autotest.sh@342 -- # 
'[' 0 -eq 1 ']' 00:50:27.000 15:00:50 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:50:27.000 15:00:50 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:50:27.000 15:00:50 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:50:27.000 15:00:50 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:50:27.000 15:00:50 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:50:27.000 15:00:50 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:50:27.000 15:00:50 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:50:27.000 15:00:50 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:50:27.000 15:00:50 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:50:27.000 15:00:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:50:27.000 15:00:50 -- common/autotest_common.sh@10 -- # set +x 00:50:27.000 15:00:50 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:50:27.000 15:00:50 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:50:27.000 15:00:50 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:50:27.000 15:00:50 -- common/autotest_common.sh@10 -- # set +x 00:50:35.141 INFO: APP EXITING 00:50:35.141 INFO: killing all VMs 00:50:35.141 INFO: killing vhost app 00:50:35.141 INFO: EXIT DONE 00:50:37.685 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:50:37.685 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:50:37.685 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:50:37.685 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:50:37.685 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:50:37.685 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:50:37.685 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:50:37.685 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:50:37.685 0000:65:00.0 (144d a80a): Already using the nvme driver 00:50:37.685 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:50:37.685 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:50:37.685 
0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:50:37.685 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:50:37.685 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:50:37.946 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:50:37.946 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:50:37.946 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:50:42.151 Cleaning 00:50:42.151 Removing: /var/run/dpdk/spdk0/config 00:50:42.151 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:50:42.151 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:50:42.151 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:50:42.151 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:50:42.151 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:50:42.151 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:50:42.151 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:50:42.151 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:50:42.151 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:50:42.151 Removing: /var/run/dpdk/spdk0/hugepage_info 00:50:42.151 Removing: /var/run/dpdk/spdk1/config 00:50:42.151 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:50:42.151 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:50:42.151 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:50:42.151 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:50:42.151 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:50:42.151 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:50:42.151 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:50:42.151 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:50:42.151 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:50:42.151 Removing: /var/run/dpdk/spdk1/hugepage_info 00:50:42.151 Removing: /var/run/dpdk/spdk2/config 00:50:42.151 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:50:42.151 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:50:42.151 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:50:42.151 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:50:42.151 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:50:42.151 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:50:42.151 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:50:42.151 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:50:42.151 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:50:42.151 Removing: /var/run/dpdk/spdk2/hugepage_info 00:50:42.151 Removing: /var/run/dpdk/spdk3/config 00:50:42.151 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:50:42.151 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:50:42.151 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:50:42.151 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:50:42.152 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:50:42.152 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:50:42.152 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:50:42.152 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:50:42.152 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:50:42.152 Removing: /var/run/dpdk/spdk3/hugepage_info 00:50:42.152 Removing: /var/run/dpdk/spdk4/config 00:50:42.152 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:50:42.152 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:50:42.152 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:50:42.152 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:50:42.152 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:50:42.152 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:50:42.152 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:50:42.152 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:50:42.152 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:50:42.152 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:50:42.152 Removing: /dev/shm/bdev_svc_trace.1 00:50:42.152 Removing: /dev/shm/nvmf_trace.0 00:50:42.152 Removing: /dev/shm/spdk_tgt_trace.pid2761954 00:50:42.152 Removing: /var/run/dpdk/spdk0 00:50:42.152 Removing: /var/run/dpdk/spdk1 00:50:42.152 Removing: /var/run/dpdk/spdk2 00:50:42.152 Removing: /var/run/dpdk/spdk3 00:50:42.152 Removing: /var/run/dpdk/spdk4 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2759426 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2761954 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2763125 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2764504 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2765186 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2766591 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2766925 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2767722 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2768877 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2769862 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2770638 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2771324 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2772031 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2772670 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2773053 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2773428 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2773979 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2775237 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2779624 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2780334 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2781036 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2781369 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2783092 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2783123 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2784849 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2785156 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2785855 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2785904 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2786570 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2786728 00:50:42.152 Removing: 
/var/run/dpdk/spdk_pid2787940 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2788311 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2788792 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2793707 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2799275 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2811374 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2812218 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2817723 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2818094 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2823600 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2831587 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2834711 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2848027 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2859515 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2861646 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2862929 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2885343 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2890618 00:50:42.152 Removing: /var/run/dpdk/spdk_pid2993620 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3000203 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3007561 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3018578 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3055194 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3060914 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3063020 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3065812 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3066170 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3066513 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3066860 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3067909 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3070239 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3071675 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3072403 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3075391 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3076327 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3077415 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3082664 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3089755 
00:50:42.152 Removing: /var/run/dpdk/spdk_pid3089756 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3089757 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3094530 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3099582 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3105462 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3150900 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3156498 00:50:42.152 Removing: /var/run/dpdk/spdk_pid3164012 00:50:42.412 Removing: /var/run/dpdk/spdk_pid3165957 00:50:42.412 Removing: /var/run/dpdk/spdk_pid3168141 00:50:42.412 Removing: /var/run/dpdk/spdk_pid3170331 00:50:42.412 Removing: /var/run/dpdk/spdk_pid3176228 00:50:42.412 Removing: /var/run/dpdk/spdk_pid3181591 00:50:42.412 Removing: /var/run/dpdk/spdk_pid3191150 00:50:42.412 Removing: /var/run/dpdk/spdk_pid3191154 00:50:42.412 Removing: /var/run/dpdk/spdk_pid3196509 00:50:42.412 Removing: /var/run/dpdk/spdk_pid3196757 00:50:42.412 Removing: /var/run/dpdk/spdk_pid3196967 00:50:42.412 Removing: /var/run/dpdk/spdk_pid3197614 00:50:42.412 Removing: /var/run/dpdk/spdk_pid3197621 00:50:42.412 Removing: /var/run/dpdk/spdk_pid3198988 00:50:42.412 Removing: /var/run/dpdk/spdk_pid3201098 00:50:42.412 Removing: /var/run/dpdk/spdk_pid3203539 00:50:42.412 Removing: /var/run/dpdk/spdk_pid3205499 00:50:42.412 Removing: /var/run/dpdk/spdk_pid3207383 00:50:42.412 Removing: /var/run/dpdk/spdk_pid3209281 00:50:42.412 Removing: /var/run/dpdk/spdk_pid3216974 00:50:42.412 Removing: /var/run/dpdk/spdk_pid3217700 00:50:42.412 Removing: /var/run/dpdk/spdk_pid3218989 00:50:42.412 Removing: /var/run/dpdk/spdk_pid3220492 00:50:42.413 Removing: /var/run/dpdk/spdk_pid3227212 00:50:42.413 Removing: /var/run/dpdk/spdk_pid3230476 00:50:42.413 Removing: /var/run/dpdk/spdk_pid3237320 00:50:42.413 Removing: /var/run/dpdk/spdk_pid3244175 00:50:42.413 Removing: /var/run/dpdk/spdk_pid3255038 00:50:42.413 Removing: /var/run/dpdk/spdk_pid3263789 00:50:42.413 Removing: /var/run/dpdk/spdk_pid3263794 00:50:42.413 Removing: 
/var/run/dpdk/spdk_pid3287578
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3288405
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3289277
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3289969
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3291356
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3292128
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3293056
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3293749
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3299742
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3300102
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3307551
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3307899
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3314600
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3319881
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3331636
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3332311
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3337563
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3337959
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3343223
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3350814
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3353838
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3366604
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3377592
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3379759
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3380867
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3401427
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3406939
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3410455
00:50:42.413 Removing: /var/run/dpdk/spdk_pid3418066
00:50:42.673 Removing: /var/run/dpdk/spdk_pid3418213
00:50:42.673 Removing: /var/run/dpdk/spdk_pid3424253
00:50:42.673 Removing: /var/run/dpdk/spdk_pid3426778
00:50:42.673 Removing: /var/run/dpdk/spdk_pid3429296
00:50:42.673 Removing: /var/run/dpdk/spdk_pid3430753
00:50:42.673 Removing: /var/run/dpdk/spdk_pid3433328
00:50:42.673 Removing: /var/run/dpdk/spdk_pid3434847
00:50:42.673 Removing: /var/run/dpdk/spdk_pid3445220
00:50:42.673 Removing: /var/run/dpdk/spdk_pid3445711
00:50:42.673 Removing: /var/run/dpdk/spdk_pid3446279
00:50:42.673 Removing: /var/run/dpdk/spdk_pid3449847
00:50:42.673 Removing: /var/run/dpdk/spdk_pid3450640
00:50:42.673 Removing: /var/run/dpdk/spdk_pid3451318
00:50:42.673 Removing: /var/run/dpdk/spdk_pid3456105
00:50:42.673 Removing: /var/run/dpdk/spdk_pid3456263
00:50:42.673 Removing: /var/run/dpdk/spdk_pid3458079
00:50:42.673 Removing: /var/run/dpdk/spdk_pid3458917
00:50:42.673 Removing: /var/run/dpdk/spdk_pid3459191
00:50:42.673 Clean
00:50:42.673 15:01:06 -- common/autotest_common.sh@1451 -- # return 0
00:50:42.673 15:01:06 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:50:42.673 15:01:06 -- common/autotest_common.sh@730 -- # xtrace_disable
00:50:42.673 15:01:06 -- common/autotest_common.sh@10 -- # set +x
00:50:42.673 15:01:06 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:50:42.673 15:01:06 -- common/autotest_common.sh@730 -- # xtrace_disable
00:50:42.673 15:01:06 -- common/autotest_common.sh@10 -- # set +x
00:50:42.673 15:01:06 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:50:42.673 15:01:06 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:50:42.673 15:01:06 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:50:42.673 15:01:06 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:50:42.673 15:01:06 -- spdk/autotest.sh@394 -- # hostname
00:50:42.673 15:01:06 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:50:42.933 geninfo: WARNING: invalid characters removed from testname!
00:51:09.511 15:01:30 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:51:09.511 15:01:33 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:51:12.052 15:01:35 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:51:13.433 15:01:36 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:51:14.814 15:01:38 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:51:16.196 15:01:39 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:51:18.106 15:01:41 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:51:18.106 15:01:41 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:51:18.106 15:01:41 -- common/autotest_common.sh@1681 -- $ lcov --version
00:51:18.106 15:01:41 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:51:18.106 15:01:41 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:51:18.106 15:01:41 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:51:18.106 15:01:41 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:51:18.106 15:01:41 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:51:18.106 15:01:41 -- scripts/common.sh@336 -- $ IFS=.-:
00:51:18.106 15:01:41 -- scripts/common.sh@336 -- $ read -ra ver1
00:51:18.106 15:01:41 -- scripts/common.sh@337 -- $ IFS=.-:
00:51:18.106 15:01:41 -- scripts/common.sh@337 -- $ read -ra ver2
00:51:18.106 15:01:41 -- scripts/common.sh@338 -- $ local 'op=<'
00:51:18.106 15:01:41 -- scripts/common.sh@340 -- $ ver1_l=2
00:51:18.106 15:01:41 -- scripts/common.sh@341 -- $ ver2_l=1
00:51:18.106 15:01:41 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:51:18.106 15:01:41 -- scripts/common.sh@344 -- $ case "$op" in
00:51:18.106 15:01:41 -- scripts/common.sh@345 -- $ : 1
00:51:18.106 15:01:41 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:51:18.106 15:01:41 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:51:18.106 15:01:41 -- scripts/common.sh@365 -- $ decimal 1
00:51:18.106 15:01:41 -- scripts/common.sh@353 -- $ local d=1
00:51:18.106 15:01:41 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:51:18.106 15:01:41 -- scripts/common.sh@355 -- $ echo 1
00:51:18.106 15:01:41 -- scripts/common.sh@365 -- $ ver1[v]=1
00:51:18.106 15:01:41 -- scripts/common.sh@366 -- $ decimal 2
00:51:18.106 15:01:41 -- scripts/common.sh@353 -- $ local d=2
00:51:18.106 15:01:41 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:51:18.106 15:01:41 -- scripts/common.sh@355 -- $ echo 2
00:51:18.106 15:01:41 -- scripts/common.sh@366 -- $ ver2[v]=2
00:51:18.106 15:01:41 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:51:18.106 15:01:41 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:51:18.106 15:01:41 -- scripts/common.sh@368 -- $ return 0
00:51:18.106 15:01:41 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:51:18.106 15:01:41 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:51:18.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:51:18.106 --rc genhtml_branch_coverage=1
00:51:18.106 --rc genhtml_function_coverage=1
00:51:18.106 --rc genhtml_legend=1
00:51:18.106 --rc geninfo_all_blocks=1
00:51:18.106 --rc geninfo_unexecuted_blocks=1
00:51:18.106 
00:51:18.106 '
00:51:18.106 15:01:41 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:51:18.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:51:18.106 --rc genhtml_branch_coverage=1
00:51:18.106 --rc genhtml_function_coverage=1
00:51:18.106 --rc genhtml_legend=1
00:51:18.106 --rc geninfo_all_blocks=1
00:51:18.106 --rc geninfo_unexecuted_blocks=1
00:51:18.106 
00:51:18.106 '
00:51:18.106 15:01:41 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:51:18.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:51:18.106 --rc genhtml_branch_coverage=1
00:51:18.106 --rc genhtml_function_coverage=1
00:51:18.106 --rc genhtml_legend=1
00:51:18.106 --rc geninfo_all_blocks=1
00:51:18.106 --rc geninfo_unexecuted_blocks=1
00:51:18.106 
00:51:18.106 '
00:51:18.106 15:01:41 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:51:18.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:51:18.106 --rc genhtml_branch_coverage=1
00:51:18.106 --rc genhtml_function_coverage=1
00:51:18.106 --rc genhtml_legend=1
00:51:18.106 --rc geninfo_all_blocks=1
00:51:18.106 --rc geninfo_unexecuted_blocks=1
00:51:18.106 
00:51:18.106 '
00:51:18.106 15:01:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:51:18.106 15:01:41 -- scripts/common.sh@15 -- $ shopt -s extglob
00:51:18.106 15:01:41 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:51:18.106 15:01:41 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:51:18.106 15:01:41 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:51:18.106 15:01:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:51:18.106 15:01:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:51:18.106 15:01:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:51:18.106 15:01:41 -- paths/export.sh@5 -- $ export PATH
00:51:18.106 15:01:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:51:18.106 15:01:41 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:51:18.106 15:01:41 -- common/autobuild_common.sh@486 -- $ date +%s
00:51:18.106 15:01:41 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728306101.XXXXXX
00:51:18.106 15:01:41 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728306101.0vUOr9
00:51:18.106 15:01:41 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:51:18.106 15:01:41 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:51:18.106 15:01:41 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:51:18.106 15:01:41 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:51:18.106 15:01:41 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:51:18.106 15:01:41 -- common/autobuild_common.sh@502 -- $ get_config_params
00:51:18.106 15:01:41 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:51:18.106 15:01:41 -- common/autotest_common.sh@10 -- $ set +x
00:51:18.106 15:01:41 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:51:18.106 15:01:41 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:51:18.106 15:01:41 -- pm/common@17 -- $ local monitor
00:51:18.106 15:01:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:51:18.106 15:01:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:51:18.106 15:01:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:51:18.106 15:01:41 -- pm/common@21 -- $ date +%s
00:51:18.106 15:01:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:51:18.106 15:01:41 -- pm/common@25 -- $ sleep 1
00:51:18.106 15:01:41 -- pm/common@21 -- $ date +%s
00:51:18.106 15:01:41 -- pm/common@21 -- $ date +%s
00:51:18.106 15:01:41 -- pm/common@21 -- $ date +%s
00:51:18.106 15:01:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728306101
00:51:18.106 15:01:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728306101
00:51:18.106 15:01:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728306101
00:51:18.106 15:01:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728306101
00:51:18.107 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728306101_collect-cpu-load.pm.log
00:51:18.107 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728306101_collect-vmstat.pm.log
00:51:18.107 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728306101_collect-cpu-temp.pm.log
00:51:18.107 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728306101_collect-bmc-pm.bmc.pm.log
00:51:19.047 15:01:42 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:51:19.047 15:01:42 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:51:19.047 15:01:42 -- spdk/autopackage.sh@14 -- $ timing_finish
00:51:19.047 15:01:42 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:51:19.047 15:01:42 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:51:19.047 15:01:42 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:51:19.047 15:01:42 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:51:19.047 15:01:42 -- pm/common@29 -- $ signal_monitor_resources TERM
00:51:19.047 15:01:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:51:19.047 15:01:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:51:19.047 15:01:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:51:19.047 15:01:42 -- pm/common@44 -- $ pid=3473625
00:51:19.047 15:01:42 -- pm/common@50 -- $ kill -TERM 3473625
00:51:19.047 15:01:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:51:19.047 15:01:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:51:19.047 15:01:42 -- pm/common@44 -- $ pid=3473626
00:51:19.047 15:01:42 -- pm/common@50 -- $ kill -TERM 3473626
00:51:19.047 15:01:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:51:19.047 15:01:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:51:19.047 15:01:42 -- pm/common@44 -- $ pid=3473628
00:51:19.047 15:01:42 -- pm/common@50 -- $ kill -TERM 3473628
00:51:19.047 15:01:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:51:19.047 15:01:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:51:19.047 15:01:42 -- pm/common@44 -- $ pid=3473645
00:51:19.047 15:01:42 -- pm/common@50 -- $ sudo -E kill -TERM 3473645
00:51:19.047 + [[ -n 2674060 ]]
00:51:19.047 + sudo kill 2674060
00:51:19.058 [Pipeline] }
00:51:19.074 [Pipeline] // stage
00:51:19.085 [Pipeline] }
00:51:19.098 [Pipeline] // timeout
00:51:19.103 [Pipeline] }
00:51:19.116 [Pipeline] // catchError
00:51:19.120 [Pipeline] }
00:51:19.134 [Pipeline] // wrap
00:51:19.140 [Pipeline] }
00:51:19.153 [Pipeline] // catchError
00:51:19.162 [Pipeline] stage
00:51:19.164 [Pipeline] { (Epilogue)
00:51:19.175 [Pipeline] catchError
00:51:19.177 [Pipeline] {
00:51:19.188 [Pipeline] echo
00:51:19.189 Cleanup processes
00:51:19.195 [Pipeline] sh
00:51:19.484 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:51:19.484 3473762 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:51:19.484 3474324 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:51:19.497 [Pipeline] sh
00:51:19.785 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:51:19.785 ++ grep -v 'sudo pgrep'
00:51:19.785 ++ awk '{print $1}'
00:51:19.785 + sudo kill -9 3473762
00:51:19.797 [Pipeline] sh
00:51:20.082 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:51:32.315 [Pipeline] sh
00:51:32.602 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:51:32.602 Artifacts sizes are good
00:51:32.617 [Pipeline] archiveArtifacts
00:51:32.626 Archiving artifacts
00:51:32.827 [Pipeline] sh
00:51:33.166 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:51:33.213 [Pipeline] cleanWs
00:51:33.236 [WS-CLEANUP] Deleting project workspace...
00:51:33.236 [WS-CLEANUP] Deferred wipeout is used...
00:51:33.243 [WS-CLEANUP] done
00:51:33.245 [Pipeline] }
00:51:33.260 [Pipeline] // catchError
00:51:33.272 [Pipeline] sh
00:51:33.559 + logger -p user.info -t JENKINS-CI
00:51:33.568 [Pipeline] }
00:51:33.581 [Pipeline] // stage
00:51:33.585 [Pipeline] }
00:51:33.598 [Pipeline] // node
00:51:33.603 [Pipeline] End of Pipeline
00:51:33.639 Finished: SUCCESS